New for SQL Server 2022: Use GENERATE_SERIES to generate a series of numbers between two values

There are several different ways to generate a table or series of numbers between two given values. Say you want to populate a table with an integer column whose values are the sequential integers from 1,000 to 1,000,000. Below I cover three different methods of accomplishing this task; the last uses the GENERATE_SERIES function that is new to SQL Server 2022.

Method 1: Using a recursive CTE

One way that you could do this is by using a recursive CTE (common table expression) like this:

--*************************************************************************
-- All numbers between 1000 and 1,000,000
-- using a recursive CTE
--*************************************************************************
declare @Start int = 1000;
declare @End int = 1000000;

with cte as 
(
    select @Start as [Num]
    union all
    select Num + 1 from cte where [Num] < @End
)
select 
 [Num]
from cte 
option (maxrecursion 0);


This produces a single-column result set containing every integer from 1,000 to 1,000,000.

Method 2: Harnessing the power of a Cartesian Join

Another way that you could accomplish this is by harnessing the power of a Cartesian join – yes, that thing that you usually do by accident!

--*************************************************************************
-- All numbers between 1000 and 1,000,000
-- without a recursive CTE
--*************************************************************************
declare @Start int = 1000;
declare @End int = 1000000;

with cte as 
(
        SELECT ones.n + (tens.n * 10) + (hundreds.n * 100) + (thousands.n * 1000) + (tenthousands.n * 10000) + (hundredthousands.n * 100000) + (millions.n * 1000000) + (tenmillions.n * 10000000) as [Num]
        FROM  ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) ones(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) tens(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) hundreds(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) thousands(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) tenthousands(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) hundredthousands(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) millions(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) tenmillions(n)
        where ones.n + (tens.n * 10) + (hundreds.n * 100) + (thousands.n * 1000) + (tenthousands.n * 10000) + (hundredthousands.n * 100000) + (millions.n * 1000000) + (tenmillions.n * 10000000) between @Start and @End
)
select 
 [Num]
from cte 
order by [Num] asc;


This produces the same result as the first method.

Method 3: Using GENERATE_SERIES

New in SQL Server 2022, the GENERATE_SERIES function lets you produce the same results with much less code.

GENERATE_SERIES generates a series of numbers within a given interval. The interval and the step between series values are defined by the user. By using this built-in function you can generate the same results as above with a single line of code:

select value as [Num] from GENERATE_SERIES(1000, 1000000, 1);

And finally we see the same result as the first two methods… just with MUCH less typing.

Arguments passed to this function:

start

The first parameter that you pass to the function is start. The start parameter specifies the start value of the series that will be generated by the function. In our example above we specified 1,000 as the start because we wanted to generate a series of numbers between 1,000 and 1,000,000, making 1,000 the starting number. This parameter can be specified as a variable, a literal, or a scalar expression of type tinyint, smallint, int, bigint, decimal, or numeric.

stop

The second parameter that you pass to the function is the stop. The stop parameter specifies the end value of the series that will be generated by the function. In our example above we specified 1,000,000 as the end because we wanted the series to stop at one million. This parameter can be specified as a variable, a literal, or a scalar expression of type tinyint, smallint, int, bigint, decimal, or numeric. The series stops once the last generated step value exceeds the stop value.

The data type for stop must match the data type for start.
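Since decimal and numeric are allowed, the function can also generate non-integer series. A minimal sketch; casting all three arguments to the same type is the safest way to satisfy the type-match rule:

select value from GENERATE_SERIES(cast(0.0 as decimal(2,1)), cast(1.0 as decimal(2,1)), cast(0.1 as decimal(2,1)));

This returns 0.0, 0.1, 0.2, and so on up through 1.0.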

step

The last parameter passed to the function indicates the number of values to increment or decrement between steps in the series. step is an expression of type tinyint, smallint, int, bigint, decimal, or numeric. step can be either negative or positive, but can’t be zero (0). In the example above we wanted sequential numbers without any gaps, but if we wanted only even numbers we could have specified 2 as our step value, and each successive number would have stepped up by 2.

This argument is optional. The default value for step is 1 if start is less than stop, otherwise, the default value is -1 if start is greater than stop.

If start is less than stop and a negative value is specified for step, or if start is greater than stop and a positive value is specified for step, an empty result set is returned.
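For instance, sticking with the range from our example, a step of 2 returns only even numbers (since the start value of 1,000 is even), and swapping start and stop with a negative step walks the series downward:

-- Even numbers only: 1000, 1002, 1004, ... 1000000
select value as [Num] from GENERATE_SERIES(1000, 1000000, 2);

-- Descending series: 1000000, 999999, ... 1000
select value as [Num] from GENERATE_SERIES(1000000, 1000, -1);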

Much simpler! You can find the Microsoft page for this function here.

How to Use Secure Strings in PowerShell Scripts

In PowerShell, a secure string is a way to store sensitive information, like passwords, in a more secure manner. Secure strings encrypt the data in memory so it’s not stored in plain text.

On occasion you may need to write a PowerShell script that performs an action requiring it to programmatically log in to another application by passing a username and password. When you do this, it is a bad idea to hard-code sensitive information (like passwords) into the script, because anyone who can access the script would also be able to see the password.

So what do you do instead?

Below we will walk through one option that involves converting the password to a secure string and storing it in a text file for reuse later on.

One last caveat before we get into the details: If you are working with any seriously secure data you should be absolutely sure that you fully understand all security risks. Bad people can, unfortunately, be very smart and I am sure that there are risks with ALL methods (this one included). There are people who devote their entire career to keeping bad people out of sensitive information, and they should be consulted if you are trying to secure nuclear codes… or perhaps your grandmother’s secret paprikash recipe.

Save a secure string to a file for later reuse:

"Sup3rS3cr3tP@ssw0rd!" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "C:\Temp\MySuperPassword.txt"

This will convert the plain text password to an encrypted string of mashed up characters and store it in a text file for reuse later on.

Important to note: The encrypted string is tied to the user and machine that originally encrypted it. This means you will only be able to decrypt it as the same user on the same machine where it was originally encrypted; a different user would not be able to decrypt it from any machine.

Retrieve the password from the encrypted file:

ConvertFrom-SecureString -SecureString (Get-Content "C:\Temp\MySuperPassword.txt" | ConvertTo-SecureString) -AsPlainText

This will decrypt the mess of mashed up characters in the text file and give you back a plain text string.

Of course, decrypting the string back to plain text defeats some of the purpose of encrypting it in the first place. The better option, if your application is compatible, is to create a PowerShell credential and then pass the credential to the application when logging in. This keeps the sensitive password secure, since it is never converted back to plain text. Here is an example of creating a credential and passing it to an Invoke-SqlCmd command:

$SecStr = Get-Content "C:\Temp\MySuperPassword.txt" | ConvertTo-SecureString

$Credential = New-Object System.Management.Automation.PSCredential ("MyUserName", $SecStr)

Invoke-SqlCmd -ServerInstance "MyServerName" -Database "master" -Credential $Credential -TrustServerCertificate -Query "select * from sys.databases" | Out-GridView

Using this method keeps the string from being converted back to plain text which decreases the opportunity for unscrupulous people to do nefarious deeds.

One last thing:

You could also capture sensitive information at runtime and store it in an in-memory secure string like this:

$SecStr = Read-Host "Enter your password" -AsSecureString

This will cause the script to pause and wait for input. Whatever the user types in will be stored in an in-memory secure string which can then be passed to a credential or used for other purposes.
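For example, the secure string captured at runtime can feed straight into a credential, just like the file-based version earlier (the user name is a placeholder):

$SecStr = Read-Host "Enter your password" -AsSecureString
$Credential = New-Object System.Management.Automation.PSCredential ("MyUserName", $SecStr)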

Understanding the Risks of SQL Server NOLOCK

Admittedly, I use NOLOCK all the time in my queries. But in my defense, most of the ad hoc queries that I write return information that I’m not that concerned about. Using the NOLOCK table hint in SQL Server can have significant implications, both positive and negative, depending on the use case. While it is commonly used to improve performance by avoiding locks on a table, it has several risks and potential downsides that make it unsuitable in many scenarios. Here’s an overview of why you shouldn’t use NOLOCK indiscriminately:


1. Dirty Reads (Uncommitted Data)

  • Risk: The NOLOCK hint allows the query to read data that is not yet committed by other transactions. This means you may read data that could be rolled back or invalid, leading to inaccurate or inconsistent results.
  • Example: If a transaction updates a table and then rolls back, your NOLOCK query might still read the uncommitted (and ultimately invalid) changes.

2. Non-Repeatable Reads and Phantom Reads

  • Risk: Data can change while the query is executing. For example, rows might be updated or deleted after your query has read them, or new rows might be added during your query, leading to inconsistent results.
  • Impact: If consistency is critical, these issues can cause unexpected behavior, especially in reports or calculations.

3. Incorrect Query Results (Missing or Duplicate Rows)

  • Risk: SQL Server may read pages that are in the process of being updated or moved. This can result in:
    • Missing rows: Rows might not be visible because they are being moved to a new page due to an update.
    • Duplicate rows: Rows might appear twice if your query reads the same row from both the old and new pages during a move operation.
  • Example: In a clustered index, rows might be moved during an update, and NOLOCK could result in reading both the old and new versions of the data.

4. Schema Instability

  • Risk: If the table schema is being modified (e.g., an index is being rebuilt), a NOLOCK query might fail or return inconsistent results.

5. Undocumented Behavior

  • Risk: The behavior of NOLOCK is not always predictable in highly concurrent environments. For example:
    • It might return data from a heap table that is in an intermediate state, resulting in corrupted or invalid results.
    • It may behave differently depending on the SQL Server version and the underlying physical storage engine.

6. False Sense of Performance Improvement

  • Risk: While NOLOCK can reduce blocking and improve performance in high-concurrency environments, it often masks underlying issues like:
    • Poor indexing.
    • Inefficient query design.
    • Overly long transactions.

Rather than using NOLOCK, it’s better to address the root causes of performance issues.


7. Inappropriate for Critical Operations

  • Risk: For operations where data integrity and accuracy are essential (e.g., financial reports, audits, inventory counts), using NOLOCK can lead to incorrect decisions or outcomes based on invalid data.

When Should You Use NOLOCK?

There are rare cases where NOLOCK might be acceptable:

  • Non-critical queries, such as exploratory queries or when generating reports that tolerate occasional inaccuracies.
  • Situations where performance is more important than 100% accuracy, and you explicitly understand and accept the risks.

However, even in these cases, consider alternatives like READ COMMITTED SNAPSHOT ISOLATION (RCSI) or query tuning to achieve better performance without compromising accuracy.
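For reference, RCSI is a database-level setting. A minimal sketch of enabling it (hypothetical database name; note that WITH ROLLBACK IMMEDIATE kicks out other sessions so the change can take effect):

-- Enable read committed snapshot isolation for a database
alter database MyDatabase set read_committed_snapshot on with rollback immediate;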


Conclusion

While NOLOCK can improve performance by reducing blocking, it comes with significant risks to data accuracy and consistency. It should be used only after understanding and accepting its implications, and only when alternatives like query tuning or isolation-level changes are insufficient or impractical. For critical systems or operations, avoid using NOLOCK to ensure reliable and accurate results.

Now, we should all go look at Brent Ozar’s post about NOLOCK here, where he gives examples of how to produce those bad things that I mentioned above (Brent is a SUPER smart guy by the way!).

Backup SQL Server Agent Jobs: PowerShell Automation Tips for SQL Server

SQL Server Agent jobs are an essential component of managing and automating various database tasks. Whether you’re scheduling backups, running maintenance plans, or executing custom scripts, these jobs are crucial for keeping your SQL Server environment running smoothly. But what happens when you need to back up and recover these jobs? Automating the process of scripting out SQL Server Agent jobs ensures you always have a backup of these critical assets. In this blog post, we’ll explore how to use the SqlServer PowerShell module to automate this process.

Now obviously, since SQL Server stores the Agent jobs in msdb, if you want SQL Server Agent jobs to be included in a full restore of a SQL Server, it is crucial that you include msdb in your regular database backup routines. But let’s say that you also wanted to automate a process of scripting out all of the SQL Server Agent jobs on a SQL Server. There are a number of reasons that you may want to do this, some of which include:

  • Backup and recovery: Safeguard your jobs in case of accidental deletion or server failure
  • Migration: Easily transfer jobs to another SQL Server instance
  • Version Control: Keep track of changes made to your jobs over time

While SQL Server Management Studio (SSMS) allows you to manually script out jobs, automating the process with PowerShell is faster, repeatable, and less error-prone.

Prerequisites

To follow along, ensure the following:

  • PowerShell is installed on your machine.
  • The SqlServer PowerShell module is installed. If not, you can install it by running:
Install-Module -Name SqlServer -Scope CurrentUser
  • You have sufficient permissions to access SQL Server Agent and script out jobs.

Import the SqlServer module and create an SMO object:

Once you have validated that the SqlServer module is installed, you can create a new SMO (SQL Server Management Objects) connection to SQL Server by executing the following command in PowerShell:

Import-Module SqlServer
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "YourSQLServerName"

Non-standard port: If your SQL Server runs on a non-standard port, append the port number to the server name, separated by a comma, like below:

Import-Module SqlServer
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server "YourSQLServerName,1234"

Now let’s list out the jobs:

Once you have created the SMO object for your SQL Server, you can list the jobs on the server by running the following command:

$srv.JobServer.Jobs

And from here you can script each job by executing the following command:

$srv.JobServer.Jobs | foreach{
    $_.Script() | Out-File "C:\temp\AllMyJobs.sql" -Append  # put in the path here 
}

However…

It may not be very helpful to have everything all mashed into one big file like a SQL Server Agent stew (probably wouldn’t taste very good anyway!). So instead let’s put everything together and add some magic to loop through the jobs and create a separate script file for each one.

Here is the script that puts it all together and scripts out each SQL Server Agent job to an individually named SQL script file:

clear-host
#------------------------------------------------------------------------------------------
# The following only needs to be run if the SqlServer module has not been installed yet:
# Install-Module -Name SqlServer -AllowClobber -Scope CurrentUser
#------------------------------------------------------------------------------------------

#------------------------------------------------------------------------------------------
# Define the SQL Server name that you want to script out Agent Jobs for.
# If the server is running on a non-standard port then include the port number separated by a comma.
# For example: "ServerName,1234"
#------------------------------------------------------------------------------------------
$serverName = "ServerName"


#------------------------------------------------------------------------------------------
# Define the base folder where the backup scripts will be stored
# If this path does not exist then the script will attempt to create it.
# ** The final output folder will include a subfolder of this path named for the date and time
# with an additional subfolder named "SqlServerAgentJobs"
# For example, something like this: C:\Temp\ScriptedSQLObjects\2025-01-07_135322\SqlServerAgentJobs\
#------------------------------------------------------------------------------------------
[string]$basePath = "C:\Temp\ScriptedSQLObjects\"    




#------------------------------------------------------------------------------------------
# Import the SQL Server module. 
# ** Remember: this line of code will fail if the SqlServer module has not been installed.
# See the line at the top of the script for the syntax for installing the module.
#------------------------------------------------------------------------------------------
Import-Module SqlServer


#------------------------------------------------------------------------------------------
# Create a new SMO connection to the SQL Server
#------------------------------------------------------------------------------------------
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $serverName

           

#------------------------------------------------------------------------------------------
# Define the final output folder path and attempt to create final output folder
#------------------------------------------------------------------------------------------
[string]$fileSuffix = (Get-Date -Format "yyyy-MM-dd_HHmmss").ToString().Trim()
[string]$outputFolder = [System.IO.Path]::Combine($basePath, $fileSuffix)

$objectFolderPath = [System.IO.Path]::Combine($outputFolder, "SqlServerAgentJobs")
if(!([System.IO.Directory]::Exists($objectFolderPath)))
{
    write-host "Creating directory path $objectFolderPath" -ForegroundColor Cyan
    [System.IO.Directory]::CreateDirectory($objectFolderPath) | out-null
}
write-host "Scripting SQL Server Agent Jobs to the following folder: $objectFolderPath" -ForegroundColor Cyan



#------------------------------------------------------------------------------------------
# Now cycle through the jobs and script out each job
#------------------------------------------------------------------------------------------
[int]$jobCnt = 0

$jobs = $srv.JobServer.Jobs
foreach($job in $jobs)
{
    [string]$objName = $job.Name.ToString().Trim()
    
    [string]$fileName = "Jobs_" + $objName + ".sql"
    $fileName = $fileName -replace '[\\/:;''*<>?!$+\[\]]', ''   # strip characters that are unsafe in file names
    [string]$fullPath = join-path $objectFolderPath $fileName

    [string]$separator = "--******************************************************************************`r`n-- Job Name: " + $job.Name.ToString().Trim() + "`r`n-- Job Category: " + $job.Category.ToString().Trim() + "`r`n--******************************************************************************`r`n"
    $separator  | Out-File $fullPath -Append
    $job.Script() | Out-File $fullPath -Append
    "`r`n go `r`n"  | Out-File $fullPath -Append
    $jobCnt++
}

Write-Host "Total Agent Jobs scripted for $serverName is $jobCnt" -ForegroundColor Cyan

Monitoring SQL Server Drive Usage: Three Queries to Get the Job Done (mostly)

Understanding the drive space usage on your SQL Server is essential for maintaining optimal performance and preventing storage-related issues. SQL Server offers several ways to monitor drive usage directly from a query. In this post, we’ll explore three different approaches, each with its unique use case and level of detail.

1. Querying Drive Space Using sys.dm_os_volume_stats

The sys.dm_os_volume_stats dynamic management function is a reliable way to retrieve detailed drive usage statistics for volumes associated with SQL Server database files. This method is particularly useful for DBAs who want a comprehensive view of drive usage. One caveat though: sys.dm_os_volume_stats only returns information for drives associated with SQL Server database files, which, for most SQL Servers, won’t include your operating system drive.

Here’s the query:

select distinct
  a.volume_mount_point
, convert(decimal(20,2), (((a.total_bytes / 1024.0) / 1024.0) / 1024.0)) as [DriveTotalSpace_GB]
, convert(decimal(20,2), (((a.available_bytes / 1024.0) / 1024.0) / 1024.0)) as [DriveAvailableSpace_GB]
, convert(decimal(20,2), ((((a.total_bytes - a.available_bytes) / 1024.0) / 1024.0) / 1024.0)) as [DriveUsedSpace_GB]
, convert(decimal(20,2), ((((a.available_bytes / 1024.0) / 1024.0) / 1024.0) / (((a.total_bytes / 1024.0) / 1024.0) / 1024.0)) * 100) as [DriveAvailablePercentage]
, convert(decimal(20,2), (((((a.total_bytes - a.available_bytes) / 1024.0) / 1024.0) / 1024.0) / (((a.total_bytes / 1024.0) / 1024.0) / 1024.0)) * 100) as [DriveUsedPercentage]
from sys.master_files as [b]
cross apply sys.dm_os_volume_stats(b.database_id,b.[file_id]) as [a];

Output: one row per volume showing total space, available space, used space, and the corresponding percentages.

Why Use This Query?

  • Provides detailed metrics, including total space, available space, used space, and percentages.
  • Helps monitor drive usage specifically for volumes where SQL Server database files are located.
  • Useful for regular storage audits and performance troubleshooting.

Why NOT Use This Query?

  • Does not display usage statistics for drives that are not associated with a SQL Server database file.

2. Using xp_fixeddrives to retrieve drive free space

The xp_fixeddrives extended stored procedure is a simpler option for quickly checking free space on all drives accessible to SQL Server. While it doesn’t provide as much detail as sys.dm_os_volume_stats, it’s fast and easy to use.

Here’s the query:

exec xp_fixeddrives;

Output:

The result shows each drive letter along with the free space (in MB) available on that drive.

Why Use This Query?

  • Quick and straightforward for a high-level view of available space.
  • Ideal for basic monitoring or initial troubleshooting.
  • No need to join multiple tables or write complex calculations.

Why NOT use this query?

  • Does not provide total drive size or percentages.
  • Limited to drives accessible to SQL Server.

3. Using sys.dm_os_enumerate_fixed_drives

sys.dm_os_enumerate_fixed_drives is a DMV that enumerates fixed mounted volumes and returns information about them.

SELECT *
    ,convert(decimal(20,2), (((free_space_in_bytes / 1024.0) / 1024.0) / 1024.0)) 'Free Space (GB)'
FROM sys.dm_os_enumerate_fixed_drives;

Why Use This Query?

  • Quick and straightforward for a high-level view of available space.
  • Returns a bit more detail than xp_fixeddrives, including the drive type and free space in bytes.
  • No need to join multiple tables or write complex calculations.

Why NOT use this query?

  • Although sys.dm_os_enumerate_fixed_drives returns more information than xp_fixeddrives, it still does not return the total drive size, so you can’t calculate a used-space percentage.

Conclusion:

At the end of the day these are all quick and easy ways to get drive info from SQL Server; however, they all fall short in one way or another. There are a plethora of other utilities, including some simple PowerShell commands, that can give you a full picture of the drive usage on your SQL Servers.
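As one example, here is a minimal PowerShell sketch (assuming a Windows host where you can run CIM queries) that reports total size, free space, and free percentage for every local fixed drive, including the operating system drive:

# Total size, free space, and free percentage for all local fixed drives (DriveType 3)
Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType = 3" |
    Select-Object DeviceID,
        @{ Name = 'SizeGB';  Expression = { [math]::Round($_.Size / 1GB, 2) } },
        @{ Name = 'FreeGB';  Expression = { [math]::Round($_.FreeSpace / 1GB, 2) } },
        @{ Name = 'FreePct'; Expression = { [math]::Round(($_.FreeSpace / $_.Size) * 100, 2) } }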

Query to get SQL Server Agent job schedules

Retrieving SQL Agent Job Schedule Details in SQL Server

If you’re working with SQL Server Agent jobs, it’s often useful to have a quick way to retrieve schedule details for all jobs in your system. Whether you’re auditing schedules, troubleshooting overlapping jobs, or simply documenting your environment, having this query on hand can save you a lot of time.

Below is a SQL query that returns the schedule details for all SQL Agent jobs in your instance. The query pulls data from system tables in the msdb database, which is where SQL Server stores job and schedule information.

--**************************************************************************************************************************************
-- Get SQL Agent Job Schedules
--**************************************************************************************************************************************
declare @exclude_disabled_schedules bit = 1, @exclude_disabled_jobs bit = 1

select a.[name] as [job_name], a.[enabled] as [IsJobEnabled], c.[enabled] as [IsScheduleEnabled], suser_sname(a.[owner_sid]) as [owner_name]
, c.[name] as [schedule_name]
, case when b.next_run_date > 0 then msdb.dbo.agent_datetime(b.next_run_date, b.next_run_time)  
    ELSE msdb.dbo.agent_datetime('19000101', b.next_run_time) END as [next_run_dttm]
, msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time) as [active_start_dttm]
, msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time) as [active_end_dttm]

, case c.freq_type when 1 then 'One time only'
                 when 4 then 'Daily'
                 when 8 then 'Weekly'
                 when 16 then 'Monthly'
                 when 32 then 'Monthly, relative to freq_interval'
                 when 64 then 'Runs when SQL Server Agent service starts'
                 when 128 then 'Runs when computer is idle'
                 else 'Unknown'
                 end as [freq_type_descr]
, case when c.freq_type = 1 then 'Unused'
       when c.freq_type = 4 then 'Every ' + convert(varchar(10), c.freq_interval) + ' days'
       when c.freq_type = 8 then 'Weekly: every'
                   + case when c.freq_interval & 1 = 1 then ' Sunday' else '' end 
                   + case when c.freq_interval & 2 = 2 then ' Monday' else '' end 
                   + case when c.freq_interval & 4 = 4 then ' Tuesday' else '' end 
                   + case when c.freq_interval & 8 = 8 then ' Wednesday' else '' end 
                   + case when c.freq_interval & 16 = 16 then ' Thursday' else '' end 
                   + case when c.freq_interval & 32 = 32 then ' Friday' else '' end 
                   + case when c.freq_interval & 64 = 64 then ' Saturday' else '' end 

       when c.freq_type = 16 then 'Monthly: on the ' + convert(varchar(10), c.freq_interval) + ' day of every ' + convert(varchar(10), c.freq_recurrence_factor) + ' month(s)'
       when c.freq_type = 32 then 'Monthly: on the ' + case when c.freq_relative_interval = 0 then 'Unused'
                 when c.freq_relative_interval = 1 then 'First'
                 when c.freq_relative_interval = 2 then 'Second'
                 when c.freq_relative_interval = 4 then 'Third'
                 when c.freq_relative_interval = 8 then 'Fourth'
                 when c.freq_relative_interval = 16 then 'Last'
                 else 'Unknown' end + ' ' + case when c.freq_interval = 1 then  'Sunday'
                                                                when c.freq_interval = 2 then  'Monday'
                                                                when c.freq_interval = 3 then  'Tuesday'
                                                                when c.freq_interval = 4 then  'Wednesday'
                                                                when c.freq_interval = 5 then  'Thursday'
                                                                when c.freq_interval = 6 then  'Friday'
                                                                when c.freq_interval = 7 then  'Saturday'
                                                                when c.freq_interval = 8 then  'Day'
                                                                when c.freq_interval = 9 then  'Weekday'
                                                                when c.freq_interval = 10 then 'Weekend day'
                                                                end + ' of every ' + convert(varchar(10), c.freq_recurrence_factor) + ' month(s)'
       else 'Unused'
       end as [freq_interval_descr]
, case when c.freq_type = 1 then 'At the specified time'
       when c.freq_subday_type = 1 then 'At the specified time'
       when c.freq_subday_type = 2 then 'Seconds'
       when c.freq_subday_type = 4 then 'Minutes'
       when c.freq_subday_type = 8 then 'Hours'
                 end as [freq_subday_type_descr]
, case 
when c.freq_type = 1 then 'At ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12)
when c.freq_subday_type = 1 then 'At ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12)
when c.freq_subday_type in (2,4,8) then 'Every ' + convert(varchar(10), c.freq_subday_interval) + ' ' + case c.freq_subday_type when 1 then 'At the specified time'
                 when 2 then 'Seconds' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 when 4 then 'Minutes' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 when 8 then 'Hours' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 end 
                 else 'Unused' end as [freq_subday_interval_descr]
from msdb.dbo.sysjobs as [a]
join msdb.dbo.sysjobschedules as [b] on b.job_id = a.job_id
join msdb.dbo.sysschedules as [c] on c.schedule_id = b.schedule_id and (@exclude_disabled_schedules = 0 or (@exclude_disabled_schedules = 1 and c.[enabled] = 1))
where (@exclude_disabled_jobs = 0 or (@exclude_disabled_jobs = 1 and a.[enabled] = 1))
order by a.[name] asc, c.[name] asc

SQL Server: Best Practices for deleting and updating data in SQL Server

Make Backups:

Backing up data is not only best practice, it should be a habit. You should always back up your data before performing one-off data updates. This allows you to do two things. First, you can validate the change by comparing the old and new data to confirm that everything went as planned. Second, you can easily bring the data back to its original state if everything did NOT go as planned.

Create backup tables in a separate backup schema:

I always like to create backup tables in a separate schema dedicated to holding one-off backup tables. This is important for eliminating confusion between one-off backup tables and production database tables. If you are someone who is bad about going back and cleaning up your messes then you will be happy you did this (more on that later). I prefer to create a schema named “bak” and put all of my backup tables in this schema. You can easily create a new schema by executing this command in your database:

create schema [bak];

Use of the “bak” schema easily differentiates production tables from one-off backup tables.

A couple of simple backup techniques:

First of all, you should always make sure that you are performing regular backups of your entire database, along with transaction log backups, so that you can restore the entire database to a specified point in time. However, for one-off updates you will want to take additional precautions to back up the data that you will be updating. This allows you to roll back only the data that you updated rather than having to restore the entire database. Here are a couple of simple techniques you can use:

Backup using “select into”:

A simple way to back up a table is by performing a “select into”:

-- Backup a table by using "select into"
select * 
    into bak.BackupTable 
from dbo.SourceTable

Using the “select into” technique will create a table with a copy of your data on the fly.

Use the OUTPUT clause to do your backup and update in ONE STEP (sweet!):

My preferred method of backing up data during an update is to use the SQL “OUTPUT” clause. I prefer this method because it performs the backup and the data manipulation all in the same transaction. This guards against the possibility that another process might update your data in between your backup and your update. Here is an example of using the output clause:

First let’s create a table that we can use for testing and insert some data into it:

-- Create a test table
create table dbo.MyTestTable (
  FirstName nvarchar(20)
, MiddleName nvarchar(20)
, LastName nvarchar(20)
)
-- insert a few rows into the test table
insert into dbo.MyTestTable (FirstName, MiddleName, LastName)
values ('George', '', 'Burns')
     , ('Nathan', '', 'Birnbaum')
     , ('Leslie', '', 'Hope')

Now create an empty copy of the table in the “bak” schema. I like to do this by using the “select into” with a “top(0)” clause.

-- create an empty copy of the table in the 'bak' schema
select top(0) * 
into bak.MyBackupTable 
from dbo.MyTestTable;

Now perform an update with an OUTPUT clause:

update dbo.MyTestTable 
set MiddleName = 'Townes' 
output deleted.FirstName,                ------------------------
       deleted.MiddleName,               -- This output clause will "output" the 
       deleted.LastName                  -- original values into the table specified
       into bak.MyBackupTable            -- in the "into"
       (FirstName, MiddleName, LastName) -------------------------------
where FirstName = 'Leslie' and LastName = 'Hope';

You can see the row that was inserted into the backup table with the original values before the update occurred:

select * from bak.MyBackupTable

Create Rollback Scripts:

For any dangerous (or even semi-dangerous) data manipulation operation you should always prepare rollback scripts that can be easily executed to get the data back to its original state (see the sketch after the list below). Taking this approach is beneficial from multiple perspectives:

  1. It can be very stressful if your update does not go as planned. Data in production databases is often very time-sensitive, and incorrect data can lead to a horrible domino effect. You don’t want to be developing a rollback solution as the building is crumbling down around you. It is always best to have the rollback planned before performing the updates.
  2. Often, the development of a rollback plan will help ensure that you won’t ACTUALLY need the rollback plan. Building out a robust rollback plan will help you think of things that you may not have otherwise thought of. There have been many times that I have caught potential pitfalls in my updates because I took the time to look at the data a different way during the development of my rollback plan.
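As a sketch using the example tables from the OUTPUT clause walkthrough above (and assuming first and last name identify the row), a rollback script simply restores the original values from the backup table:

-- Restore the original values captured in the backup table
update t
set t.MiddleName = b.MiddleName
from dbo.MyTestTable as t
join bak.MyBackupTable as b
    on b.FirstName = t.FirstName
   and b.LastName = t.LastName;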

Use transactions:

Perform your data manipulation inside of a transaction. This will allow you to roll back your changes if you see that you updated a different number of rows than you were expecting. It will also allow you to validate your update before committing or rolling back.

First execute your “begin transaction” with your data manipulation command:

begin transaction
update dbo.MyTestTable 
set MiddleName = 'Townes' 
where FirstName = 'Leslie' and LastName = 'Hope';

Then validate your update. If all looks correct you can commit your changes by executing “commit” or rollback your changes by executing “rollback”

-- execute this to commit your updates:
commit
-- or this to rollback your updates:
rollback

DON’T FORGET TO COMMIT or ROLLBACK!

When you update data inside of a transaction SQL Server will hold a lock on the data that you updated until the transaction is committed or rolled back. This means that you could cause blocking in the database until you complete your transaction. Trust me on this, you WILL start getting calls from users if you forget to do this! (Don’t ask me how I know this). So, in summary:

  1. Don’t take too long to validate your update
  2. Remember to COMMIT or ROLLBACK… DON’T leave your transaction open.

Commit or rollback until you see RED:

No, I don’t mean that you should get angry at your commits and rollbacks. What I mean by this is that you will find that on occasion you will execute the BEGIN TRANSACTION by itself. Then, forgetting that you already started a transaction, you will also execute it with your update statement… So now you actually have TWO open transactions. Then after you validate your results you will execute the commit (or rollback) just once… leaving a transaction open, and subsequently locked rows (don’t ask me how I know this either).

For this reason I always (and you should too) hit F5 on my commit or rollback until I get the red error message that states the following:

Msg 3902, Level 16, State 1, Line 39
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.

or

Msg 3903, Level 16, State 1, Line 38
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.

At that point you can be sure that you didn’t leave any open transactions.
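If you would rather not lean on error messages, @@TRANCOUNT reports how many transactions are still open on your session:

-- 0 means no open transactions; anything higher means you still owe a commit or rollback
select @@TRANCOUNT as [OpenTransactionCount];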

Do Peer Reviews:

As the saying goes: “Two heads are better than one”… or is it “two cupcakes are better than one”?

Seriously though, any time that you are doing anything semi-dangerous in a database you should schedule a peer review with a colleague. I can almost guarantee that they will provide a different perspective on the change which may save you a LOT of frustration.

Clean up your mess:

I can hear my mom telling me: “if you are capable of making the mess, then you are capable of cleaning it up too!”.

That having been said, don’t leave your backup tables out there indefinitely. It just leaves a mess in the database and can cause potential confusion later on down the road. Once your validation is complete and you no longer need the backup data, go back and drop your backup tables. If you feel uncomfortable removing the backup tables right away then be sure to put a reminder in your calendar to go back at a later date and remove them. Whatever method you use, please be sure to clean up your mess.
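For the example earlier in this post, the cleanup is a one-liner:

-- Remove the one-off backup table once validation is complete
drop table bak.MyBackupTable;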

Take precautions during script development:

Be sure to comment out dangerous statements in your script during the script development. You don’t want an accidental strike of the F5 key to ruin your week. One additional precaution that I like to take is to wrap my script in an “if” statement that always returns false, like this:

-- My extra safety precaution:
if 0=1
begin

    -- do your script development here

end

Since the “if” statement always evaluates to false, if I accidentally hit the F5 key all of the statements inside of the “if” will get bypassed. It’s “sort of” like commenting everything, but still getting the benefits of intellisense and all the pretty colors.

With great power comes great responsibility!

I remember sitting in the theater watching Spider-Man and hearing Uncle Ben tell Peter Parker “with great power comes great responsibility”. Besides being a great movie line, it’s also a very true statement. Remember that if you are someone who has the ability to update data in production databases, then you are also someone who has the ability to potentially ruin your career, along with the careers of many of your colleagues, with a single keystroke. Exercising caution when manipulating data in a production database is a requirement (unless you really want to ruin your career).

Since we are all human everyone is entitled to the occasional mistake. But don’t let your mistake be the result of being too lazy to take a few extra precautions.

  1. It takes roughly 5 seconds to type out a “begin transaction”.
  2. It takes roughly 4 minutes to create a backup table and add the output clause to a DML statement.
  3. It takes roughly 15 seconds to type out a “select * into” statement.
  4. Developing a rollback plan is a little more time consuming… but if you ever need it, you will be glad you did it.

All small prices to pay for the safety they provide.

Query to get the SQL Server agent job schedules

Ever want to have a query to get the SQL Server Agent job schedules? Well here you go:

select a.[name] as [job_name], suser_sname(a.[owner_sid]) as [owner_name]
, c.[name] as [schedule_name]
, msdb.dbo.agent_datetime(b.next_run_date, b.next_run_time) as [next_run_dttm]
, msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time) as [active_start_dttm]
, msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time) as [active_end_dttm]

, case c.freq_type when 1 then 'One time only'
                 when 4 then 'Daily'
                 when 8 then 'Weekly'
                 when 16 then 'Monthly'
                 when 32 then 'Monthly, relative to freq_interval'
                 when 64 then 'Runs when SQL Server Agent service starts'
                 when 128 then 'Runs when computer is idle'
                 else 'Unknown'
                 end as [freq_type_descr]
, case when c.freq_type = 1 then 'Unused'
       when c.freq_type = 4 then 'Every ' + convert(varchar(10), c.freq_interval) + ' days'
       when c.freq_type = 8 then 'Weekly: every'
                   + case when c.freq_interval & 1 = 1 then ' Sunday' else '' end 
                   + case when c.freq_interval & 2 = 2 then ' Monday' else '' end 
                   + case when c.freq_interval & 4 = 4 then ' Tuesday' else '' end 
                   + case when c.freq_interval & 8 = 8 then ' Wednesday' else '' end 
                   + case when c.freq_interval & 16 = 16 then ' Thursday' else '' end 
                   + case when c.freq_interval & 32 = 32 then ' Friday' else '' end 
                   + case when c.freq_interval & 64 = 64 then ' Saturday' else '' end 

       when c.freq_type = 16 then 'Monthly: on the ' + convert(varchar(10), c.freq_interval) + ' day of every ' + convert(varchar(10), c.freq_recurrence_factor) + ' month(s)'
       when c.freq_type = 32 then 'Monthly: on the ' + case when c.freq_relative_interval = 0 then 'Unused'
                 when c.freq_relative_interval = 1 then 'First'
                 when c.freq_relative_interval = 2 then 'Second'
                 when c.freq_relative_interval = 4 then 'Third'
                 when c.freq_relative_interval = 8 then 'Fourth'
                 when c.freq_relative_interval = 16 then 'Last'
                 else 'Unknown' end + ' ' + case when c.freq_interval = 1 then  'Sunday'
                                                                when c.freq_interval = 2 then  'Monday'
                                                                when c.freq_interval = 3 then  'Tuesday'
                                                                when c.freq_interval = 4 then  'Wednesday'
                                                                when c.freq_interval = 5 then  'Thursday'
                                                                when c.freq_interval = 6 then  'Friday'
                                                                when c.freq_interval = 7 then  'Saturday'
                                                                when c.freq_interval = 8 then  'Day'
                                                                when c.freq_interval = 9 then  'Weekday'
                                                                when c.freq_interval = 10 then 'Weekend day'
                                                                end + ' of every ' + convert(varchar(10), c.freq_recurrence_factor) + ' month(s)'
       else 'Unused'
       end as [freq_interval_descr]
, case when c.freq_type = 1 then 'At the specified time'
       when c.freq_subday_type = 1 then 'At the specified time'
       when c.freq_subday_type = 2 then 'Seconds'
       when c.freq_subday_type = 4 then 'Minutes'
       when c.freq_subday_type = 8 then 'Hours'
                 end as [freq_subday_type_descr]
, case 
when c.freq_type = 1 then 'At ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12)
when c.freq_subday_type = 1 then 'At ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12)
when c.freq_subday_type in (2,4,8) then 'Every ' + convert(varchar(10), c.freq_subday_interval) + ' ' + case c.freq_subday_type when 1 then 'At the specified time'
                 when 2 then 'Seconds' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 when 4 then 'Minutes' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 when 8 then 'Hours' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 end 
                 else 'Unused' end as [freq_subday_interval_descr]
from msdb.dbo.sysjobs as [a]
join msdb.dbo.sysjobschedules as [b] on b.job_id = a.job_id
join msdb.dbo.sysschedules as [c] on c.schedule_id = b.schedule_id and c.[enabled] = 1
where a.[enabled] = 1

SQL Server – Four different T-SQL scripts to get your SQL Server version

Hello everyone.  I just had someone ask me for a script that will return the version of SQL Server.  Off the top of my head I am aware of four different ways, which I describe below.

If you have another method please feel free to share in the comments.

 

Method #1: serverproperty()

serverproperty() is a built-in system function that returns property information about the server instance.  This function can return quite a few different properties from your instance of SQL Server, but for the purpose of this post I am going to focus on just a few of them.  BOL has a very nice description of this function and the different properties that it can return.

In this script I am returning the following things:

  1. ProductVersion: Version of the instance of SQL Server, in the form of ‘major.minor.build.revision‘.
  2. ProductLevel: Level of the version of the instance of SQL Server. Returns one of the following: ‘RTM’ = Original release version, ‘SPn‘ = Service pack version, ‘CTP’ = Community Technology Preview version.
  3. Edition: Installed product edition of the instance of SQL Server.

select
convert(varchar(128), serverproperty('ProductVersion')) as [InstanceVersion]
, convert(varchar(128), serverproperty('ProductLevel')) as [ProductLevel]
, convert(varchar(128), serverproperty('Edition')) as [Edition];

The output is a single row showing the instance version, product level, and edition.

 

Method #2: @@version

@@version is another system function that returns system and build information for the current installation of SQL Server.

select @@version;

The output is a single string containing the full version, build, and platform details.

 

Method #3: exec sp_server_info

exec sp_server_info is a system procedure that returns a plethora of information about your SQL Server instance.  If you pass it the @attribute_id parameter it will return the specific attribute you are looking for.  The query below will specifically return the “DBMS_VER” attribute (which displays instance version information).

exec sp_server_info @attribute_id = 2

The output is a single row showing the DBMS_VER attribute with the version string.

 

Method #4: exec xp_msver

exec xp_msver is another system procedure that returns yet another plethora of info about your SQL Server instance.  Running it without parameters will return several rows with interesting information but, again, for the purpose of this post I am passing it “ProductVersion” as a parameter in the query below:

exec xp_msver 'ProductVersion'

The output is a single row showing the ProductVersion value.

SQL Server – a server trigger to send an email when security changes occur

So I’ve been given a number of security-related tasks lately.  Here is another script that I just finished and, once again, I figured that I would share it.

Recently someone asked me if they could get an email alert from SQL Server, with the details of the action, every time a security-related change happened on the server.  He wanted the email to trigger on things like a login being created, a user being added to a database, or a login being added to a server role, etc.

Below is the script that I wrote to create a server-level trigger that automatically sends an email, with the pertinent details, to the specified email address whenever one of these security-related events occurs on the server.

There is a whole cornucopia of different events that you can tell your trigger to fire on.  I chose several that seemed most pertinent to the task at hand.

** If you happen to use the script be sure to update the email address and the mail profile name to the correct values for your environment.

As always, please comment if you have come up with a better solution and would be willing to share.

Enjoy!


USE [master]
GO

create trigger [trig_security_notification] on all server
for ALTER_AUTHORIZATION_SERVER
 , GRANT_SERVER, REVOKE_SERVER
 , CREATE_LOGIN, DROP_LOGIN, ALTER_LOGIN
 , ALTER_AUTHORIZATION_DATABASE -- ******* Comment out this line for SQL Server 2005. ******************
 , GRANT_DATABASE, REVOKE_DATABASE -- ******* Comment out this line for SQL Server 2005. ******************
 , CREATE_USER, DROP_USER, ALTER_USER -- ******* Comment out this line for SQL Server 2005. ******************
 , ADD_SERVER_ROLE_MEMBER, DROP_SERVER_ROLE_MEMBER -- ******* Comment out this line for SQL Server 2005. ******************
 , ADD_ROLE_MEMBER , DROP_ROLE_MEMBER -- ******* Comment out this line for SQL Server 2005. ******************
 , CREATE_ROLE, DROP_ROLE, ALTER_ROLE -- ******* Comment out this line for SQL Server 2005. ******************
as

 declare @EventType nvarchar(512), @SchemaName nvarchar(128), @ObjectName nvarchar(128), @ObjectType nvarchar(128), @DatabaseName nvarchar(128), @CommandText nvarchar(max), @PostTime datetime, @LoginName nvarchar(128), @ServerName nvarchar(128), @suser_sname nvarchar(128);
 declare @body nvarchar(max), @recipients nvarchar(max), @page_recipients nvarchar(max), @subject nvarchar(255), @from_address nvarchar(max), @mailProfile nvarchar(128);

 select @EventType = EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(512)') --as [EventType]
 , @SchemaName = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)') --as [SchemaName]
 , @ObjectName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)') --as [ObjectName]
 , @ObjectType = EVENTDATA().value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(128)') --as [ObjectType]
 , @DatabaseName = EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)') --as [DatabaseName]
 , @CommandText = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)') --as [CommandText]
 , @PostTime = EVENTDATA().value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime') --as [PostTime]
 , @LoginName = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)') --as [LoginName]
 , @ServerName = EVENTDATA().value('(/EVENT_INSTANCE/ServerName)[1]', 'nvarchar(128)') --as [ServerName]
 , @suser_sname = suser_sname(); --as [UserName];

 SELECT @recipients = 'YourEmail@YourDomain.com', @mailProfile = 'YourServerMailProfileName';
 select @from_address = replace(isnull(convert(nvarchar(128), serverproperty('servername')), convert(nvarchar(128), @@servername)), '\', '_') + '@yourdomain.com'

 set @subject = isnull(@EventType, '') + ' occurred on ' + isnull(@ServerName, convert(nvarchar(128), serverproperty('servername'))) + ' at ' + substring(isnull(convert(varchar(50), @PostTime, 121), ''),1,19) + ' by ' + isnull(isnull(@LoginName, @suser_sname), '') + ' on object ' + isnull(@ObjectName, '') + '. Command text: ' + isnull(@CommandText, '');

 set @body = 'A security change occurred on SQL Server ' + isnull(@ServerName, convert(nvarchar(128), serverproperty('servername'))) + char(13)+char(10) + char(13)+char(10);
 set @body = @body + 'Event Type: ' + isnull(@EventType, '') + char(13) + char(10);
 set @body = @body + 'Action performed by: ' + isnull(isnull(@LoginName, @suser_sname), '') + char(13) + char(10);
 set @body = @body + 'Object Name: ' + isnull(@ObjectName, '') + char(13) + char(10);
 set @body = @body + 'Object Type: ' + isnull(@ObjectType, '') + char(13) + char(10);
 set @body = @body + 'Command Text: ' + isnull(@CommandText, '') + char(13) + char(10);
 set @body = @body + 'Action occurred at: ' + isnull(convert(varchar(50), @PostTime, 121), '') + char(13) + char(10) + char(13) + char(10);
 set @body = @body + 'Please investigate if this action was not expected.';

 --*******************************************************************
 -- send email
 --*******************************************************************
 if len(@subject) > 255 set @subject = substring(@subject,1,255);
 exec msdb.dbo.sp_send_dbmail @profile_name = @mailProfile, @recipients = @recipients, @from_address = @from_address, @reply_to = @from_address, @subject = @subject, @body = @body, @importance = 'High';
 print @body;

GO

ENABLE TRIGGER [trig_security_notification] ON ALL SERVER
GO
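If you want to verify that the trigger is working, creating and then dropping a throwaway login (hypothetical name and password below) should produce two notification emails:

-- Fires the trigger on CREATE_LOGIN, then again on DROP_LOGIN
create login [TriggerSmokeTest] with password = 'Sup3rS3cr3t!ChangeMe1';
drop login [TriggerSmokeTest];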