New for SQL Server 2022: Use GENERATE_SERIES to generate a series of numbers between two values

There are several different ways to generate a table or series of numbers between two given values. Say you want to populate a table with an integer column whose values are the sequential integers from 1,000 to 1,000,000. Below I cover three different methods of accomplishing this task; the last one uses the GENERATE_SERIES function that is new to SQL Server 2022.

Method 1: Using a recursive CTE

One way that you could do this is by using a recursive CTE (common table expression) like this:

--*************************************************************************
-- All numbers between 1000 and 1,000,000
-- using a recursive CTE
--*************************************************************************
declare @Start int = 1000;
declare @End int = 1000000;

with cte as 
(
    select @Start as [Num]
    union all
    select Num + 1 from cte where [Num] < @End
)
select 
 [Num]
from cte 
option (maxrecursion 0);


This produces a single column of sequential integers from 1,000 to 1,000,000.

Method 2: Harnessing the power of a Cartesian Join

Another way that you could accomplish this is by harnessing the power of a Cartesian join – yes, that thing that you usually do by accident!

--*************************************************************************
-- All numbers between 1000 and 1,000,000
-- without a recursive CTE
--*************************************************************************
declare @Start int = 1000;
declare @End int = 1000000;

with cte as 
(
        SELECT ones.n + (tens.n * 10) + (hundreds.n * 100) + (thousands.n * 1000) + (tenthousands.n * 10000) + (hundredthousands.n * 100000) + (millions.n * 1000000) + (tenmillions.n * 10000000) as [Num]
        FROM  ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) ones(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) tens(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) hundreds(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) thousands(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) tenthousands(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) hundredthousands(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) millions(n)
            cross apply ( VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) tenmillions(n)
            where ones.n + (tens.n * 10) + (hundreds.n * 100) + (thousands.n * 1000) + (tenthousands.n * 10000) + (hundredthousands.n * 100000) + (millions.n * 1000000) + (tenmillions.n * 10000000) between @Start and @End
)
select 
 [Num]
from cte 
order by [Num] asc;


This produces the same result as the first method.

Method 3: Using GENERATE_SERIES instead

New in SQL Server 2022, the GENERATE_SERIES function lets you produce the same results with much less code.

GENERATE_SERIES generates a series of numbers within a given interval. The interval and the step between series values are defined by the user. By using this built-in function you can generate the same results as above with a single line of code:

select value as [Num] from GENERATE_SERIES(1000, 1000000, 1);

And finally we see the same result as the first two methods… just with MUCH less typing.
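
If the goal is actually to populate a table rather than just return a result set, any of the three methods above can feed an INSERT ... SELECT. A minimal sketch, using a hypothetical dbo.Numbers table:

create table dbo.Numbers (Num int not null primary key);

insert into dbo.Numbers (Num)
select value from GENERATE_SERIES(1000, 1000000, 1);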

Arguments passed to this function:

start

The first parameter that you pass to the function is start. The start parameter specifies the start value of the series that will be generated by the function. In our example above we specified 1,000 as the start because we wanted to generate a series of numbers between 1,000 and 1,000,000, making 1,000 the starting number. This parameter can be specified as a variable, a literal, or a scalar expression of type tinyint, smallint, int, bigint, decimal, or numeric.

stop

The second parameter that you pass to the function is the stop. The stop parameter specifies the end value of the series that will be generated by the function. In our example above we specified 1,000,000 as the end because we wanted the series to stop at one million. This parameter can be specified as a variable, a literal, or a scalar expression of type tinyint, smallint, int, bigint, decimal, or numeric. The series stops once the last generated step value exceeds the stop value.

The data type for stop must match the data type for start.

step

The last parameter passed to the function indicates the number of values to increment or decrement between steps in the series. step is an expression of type tinyint, smallint, int, bigint, decimal, or numeric. step can be either negative or positive, but can’t be zero (0). In the example above we wanted sequential numbers without any gaps, but if we wanted only even numbers we could have specified 2 as our step value, and each successive number would then increase by 2.

This argument is optional. The default value for step is 1 if start is less than stop, otherwise, the default value is -1 if start is greater than stop.

If start is less than stop and a negative value is specified for step, or if start is greater than stop and a positive value is specified for step, an empty result set is returned.
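
For example, a step of 2 starting at 1,000 returns only the even numbers in the range, and a negative step counts the series down instead of up:

-- Even numbers from 1,000 to 1,000,000 (step of 2)
select value as [Num] from GENERATE_SERIES(1000, 1000000, 2);

-- Counting down from 1,000,000 to 1,000 (explicit negative step)
select value as [Num] from GENERATE_SERIES(1000000, 1000, -1);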

Much simpler! You can find the Microsoft page for this function here.

How to Use Secure Strings in PowerShell Scripts

In PowerShell, a secure string is a way to store sensitive information, like passwords, in a more secure manner. Secure strings encrypt the data in memory so it’s not stored in plain text.

On occasion you may need to write a PowerShell script that performs an action requiring it to programmatically log in to another application by passing a username and password. When you do, it is a bad idea to hard-code sensitive information (like passwords) into the script, because anyone who can access the script can also see the password.

So what do you do instead?

Below we will walk through one option that involves converting the password to a secure string and storing it in a text file for reuse later on.

One last caveat before we get into the details: If you are working with any seriously secure data you should be absolutely sure that you fully understand all security risks. Bad people can, unfortunately, be very smart and I am sure that there are risks with ALL methods (this one included). There are people who devote their entire career to keeping bad people out of sensitive information, and they should be consulted if you are trying to secure nuclear codes… or perhaps your grandmother’s secret paprikash recipe.

Save a secure string to a file for later reuse:

"Sup3rS3cr3tP@ssw0rd!" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "C:\Temp\MySuperPassword.txt"

This will convert the plain text password to an encrypted string of mashed up characters and store it in a text file for reuse later on.

Important to note: The encrypted string is tied to the user and machine that originally encrypted it. This means that you will only be able to decrypt it from the same machine that you originally encrypted it on. This also means that a different user would not be able to decrypt it from any machine.
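
If you do need a file that can be decrypted by a different account or machine, ConvertTo-SecureString and ConvertFrom-SecureString also accept a -Key parameter, which encrypts with an AES key that you supply (and must protect yourself) instead of the user/machine-bound default. A minimal sketch, with a hypothetical file path:

# Generate a 32-byte AES key once and store it somewhere safe (hypothetical approach)
$Key = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($Key)

# Encrypt the password with the supplied key instead of the default user/machine binding
"Sup3rS3cr3tP@ssw0rd!" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString -Key $Key | Out-File "C:\Temp\MyPortablePassword.txt"

# Decrypt later from any account or machine that has the key
$SecStr = Get-Content "C:\Temp\MyPortablePassword.txt" | ConvertTo-SecureString -Key $Key

The obvious trade-off is that the key itself now has to be stored and protected.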

Retrieve the password from the encrypted file:

ConvertFrom-SecureString -SecureString (Get-Content "C:\Temp\MySuperPassword.txt" | ConvertTo-SecureString) -AsPlainText

This will decrypt the mess of mashed up characters in the text file and give you back a plain text string.

Of course, decrypting the string back to plain text defeats some of the purpose of encrypting it in the first place. The better option, if your application supports it, is to create a PowerShell credential and pass the credential to the application when logging in. This keeps the sensitive password secure because it is never converted back to plain text. Here is an example of creating a credential and passing it to an Invoke-SqlCmd command:

$SecStr = Get-Content "C:\Temp\MySuperPassword.txt" | ConvertTo-SecureString

$Credential = New-Object System.Management.Automation.PSCredential ("MyUserName", $SecStr)

Invoke-SqlCmd -ServerInstance "MyServerName" -Database "master" -Credential $Credential -TrustServerCertificate -Query "select * from sys.databases" | Out-GridView

Using this method keeps the string from being converted back to plain text, which decreases the opportunity for unscrupulous people to do nefarious deeds.

One last thing:

You could also capture sensitive information at runtime and store it in an in-memory secure string like this:

$SecStr = Read-Host "Enter your password" -AsSecureString

This will cause the script to pause and wait for input. Whatever the user types in will be stored in an in-memory secure string which can then be passed to a credential or used for other purposes.

Understanding the Risks of SQL Server NOLOCK

Admittedly, I use NOLOCK all the time in my queries. But in my defense, most of the queries that I write ad-hoc are returning information that I’m not that concerned about. Using the NOLOCK table hint in SQL Server can have significant implications, both positive and negative, depending on the use case. While it is commonly used to improve performance by avoiding locks on a table, it has several risks and potential downsides that make it unsuitable in many scenarios. Here’s an overview of why you shouldn’t use NOLOCK indiscriminately:


1. Dirty Reads (Uncommitted Data)

  • Risk: The NOLOCK hint allows the query to read data that is not yet committed by other transactions. This means you may read data that could be rolled back or invalid, leading to inaccurate or inconsistent results.
  • Example: If a transaction updates a table and then rolls back, your NOLOCK query might still read the uncommitted (and ultimately invalid) changes.

2. Non-Repeatable Reads and Phantom Reads

  • Risk: Data can change while the query is executing. For example, rows might be updated or deleted after your query has read them, or new rows might be added during your query, leading to inconsistent results.
  • Impact: If consistency is critical, these issues can cause unexpected behavior, especially in reports or calculations.

3. Incorrect Query Results (Missing or Duplicate Rows)

  • Risk: SQL Server may read pages that are in the process of being updated or moved. This can result in:
    • Missing rows: Rows might not be visible because they are being moved to a new page due to an update.
    • Duplicate rows: Rows might appear twice if your query reads the same row from both the old and new pages during a move operation.
  • Example: In a clustered index, rows might be moved during an update, and NOLOCK could result in reading both the old and new versions of the data.

4. Schema Instability

  • Risk: If the table schema is being modified (e.g., an index is being rebuilt), a NOLOCK query might fail or return inconsistent results.

5. Undocumented Behavior

  • Risk: The behavior of NOLOCK is not always predictable in highly concurrent environments. For example:
    • It might return data from a heap table that is in an intermediate state, resulting in corrupted or invalid results.
    • It may behave differently depending on the SQL Server version and the underlying physical storage engine.

6. False Sense of Performance Improvement

  • Risk: While NOLOCK can reduce blocking and improve performance in high-concurrency environments, it often masks underlying issues like:
    • Poor indexing.
    • Inefficient query design.
    • Overly long transactions.

Rather than using NOLOCK, it’s better to address the root causes of performance issues.


7. Inappropriate for Critical Operations

  • Risk: For operations where data integrity and accuracy are essential (e.g., financial reports, audits, inventory counts), using NOLOCK can lead to incorrect decisions or outcomes based on invalid data.

When Should You Use NOLOCK?

There are rare cases where NOLOCK might be acceptable:

  • Non-critical queries, such as exploratory queries or when generating reports that tolerate occasional inaccuracies.
  • Situations where performance is more important than 100% accuracy, and you explicitly understand and accept the risks.

However, even in these cases, consider alternatives like READ COMMITTED SNAPSHOT ISOLATION (RCSI) or query tuning to achieve better performance without compromising accuracy.
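
For reference, RCSI is a database-level setting (database name hypothetical):

ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

With RCSI enabled, readers see the last committed version of a row instead of blocking or reading dirty data, at the cost of additional version-store activity in tempdb.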


Conclusion

While NOLOCK can improve performance by reducing blocking, it comes with significant risks to data accuracy and consistency. It should be used only after understanding and accepting its implications, and only when alternatives like query tuning or isolation-level changes are insufficient or impractical. For critical systems or operations, avoid using NOLOCK to ensure reliable and accurate results.

Now, we should all go look at Brent Ozar’s post about NOLOCK here, where he gives examples of how to produce those bad things that I mentioned above (Brent is a SUPER smart guy by the way!).

Monitoring SQL Server Drive Usage: Three Queries to Get the Job Done (mostly)

Understanding the drive space usage on your SQL Server is essential for maintaining optimal performance and preventing storage-related issues. SQL Server offers several ways to monitor drive usage directly from a query. In this post, we’ll explore three different approaches, each with its unique use case and level of detail.

1. Querying Drive Space Using sys.dm_os_volume_stats

The sys.dm_os_volume_stats dynamic management function is a reliable way to retrieve detailed drive usage statistics for the volumes associated with SQL Server database files. This method is particularly useful for DBAs who want a comprehensive view of drive usage. One caveat, though: sys.dm_os_volume_stats only returns information for drives that hold SQL Server database files, which, on most SQL Servers, will not include the operating system drive.

Here’s the query:

select distinct
  a.volume_mount_point
, convert(decimal(20,2), (((a.total_bytes / 1024.0) / 1024.0) / 1024.0)) as [DriveTotalSpace_GB]
, convert(decimal(20,2), (((a.available_bytes / 1024.0) / 1024.0) / 1024.0)) as [DriveAvailableSpace_GB]
, convert(decimal(20,2), ((((a.total_bytes - a.available_bytes) / 1024.0) / 1024.0) / 1024.0)) as [DriveUsedSpace_GB]
, convert(decimal(20,2), ((((a.available_bytes / 1024.0) / 1024.0) / 1024.0) / (((a.total_bytes / 1024.0) / 1024.0) / 1024.0)) * 100) as [DriveAvailablePercentage]
, convert(decimal(20,2), (((((a.total_bytes - a.available_bytes) / 1024.0) / 1024.0) / 1024.0) / (((a.total_bytes / 1024.0) / 1024.0) / 1024.0)) * 100) as [DriveUsedPercentage]
from sys.master_files as [b]
cross apply sys.dm_os_volume_stats(b.database_id,b.[file_id]) as [a];

Output: one row per volume with the mount point, total space, available space, and used space in GB, plus available and used percentages.

Why Use This Query?

  • Provides detailed metrics, including total space, available space, used space, and percentages.
  • Helps monitor drive usage specifically for volumes where SQL Server database files are located.
  • Useful for regular storage audits and performance troubleshooting.

Why NOT Use This Query?

  • Does not display usage statistics for drives that are not associated with a SQL Server database file.

2. Using xp_fixeddrives to retrieve drive free space

The xp_fixeddrives extended stored procedure is a simpler option for quickly checking free space on all drives accessible to SQL Server. While it doesn’t provide as much detail as sys.dm_os_volume_stats, it’s fast and easy to use.

Here’s the query:

exec xp_fixeddrives;

Output:

The result shows each drive letter along with the free space (in MB) available on that drive.

Why Use This Query?

  • Quick and straightforward for a high-level view of available space.
  • Ideal for basic monitoring or initial troubleshooting.
  • No need to join multiple tables or write complex calculations.

Why NOT use this query?

  • Does not provide total drive size or percentages.
  • Limited to drives accessible to SQL Server.

3. Using sys.dm_os_enumerate_fixed_drives

sys.dm_os_enumerate_fixed_drives is a DMV that enumerates fixed mounted volumes and returns information about them.

SELECT *
    ,convert(decimal(20,2), (((free_space_in_bytes / 1024.0) / 1024.0) / 1024.0)) 'Free Space (GB)'
FROM sys.dm_os_enumerate_fixed_drives;

Why Use This Query?

  • Quick and straightforward for a high-level view of available space.
  • Ideal for basic monitoring or initial troubleshooting.
  • No need to join multiple tables or write complex calculations.

Why NOT use this query?

  • Although sys.dm_os_enumerate_fixed_drives returns more information than xp_fixeddrives, it still does not return total drive size, so you cannot calculate a used-space percentage from it alone.

Conclusion:

At the end of the day these are all quick and easy ways to get drive info from SQL Server… however, each falls short in one way or another. There are plenty of other utilities, including some simple PowerShell commands, that can give you a full picture of drive usage on your SQL Servers.
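
For example, a couple of PowerShell one-liners (run on the server itself, and assuming the Storage module's Get-Volume cmdlet is available) will report size and free space for every fixed drive:

# Size and free space per volume, in GB
Get-Volume | Where-Object DriveLetter | Select-Object DriveLetter, FileSystemLabel,
    @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 2) } },
    @{ Name = 'FreeGB'; Expression = { [math]::Round($_.SizeRemaining / 1GB, 2) } }

# Or a quick view of used/free bytes per drive
Get-PSDrive -PSProvider FileSystem | Select-Object Name, Used, Free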

Retrieving SQL Agent Job Schedule Details in SQL Server

If you’re working with SQL Server Agent jobs, it’s often useful to have a quick way to retrieve schedule details for all jobs in your system. Whether you’re auditing schedules, troubleshooting overlapping jobs, or simply documenting your environment, having this query on hand can save you a lot of time.

Below is a SQL query that returns the schedule details for all SQL Agent jobs in your instance. The query pulls data from system tables in the msdb database, which is where SQL Server stores job and schedule information.

--**************************************************************************************************************************************
-- Get SQL Agent Job Schedules
--**************************************************************************************************************************************
declare @exclude_disabled_schedules bit = 1, @exclude_disabled_jobs bit = 1

select a.[name] as [job_name], a.[enabled] as [IsJobEnabled], c.[enabled] as [IsScheduleEnabled], suser_sname(a.[owner_sid]) as [owner_name]
, c.[name] as [schedule_name]
, case when b.next_run_date > 0 then msdb.dbo.agent_datetime(b.next_run_date, b.next_run_time)  
    ELSE msdb.dbo.agent_datetime('19000101', b.next_run_time) END as [next_run_dttm]
, msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time) as [active_start_dttm]
, msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time) as [active_end_dttm]

, case c.freq_type when 1 then 'One time only'
                 when 4 then 'Daily'
                 when 8 then 'Weekly'
                 when 16 then 'Monthly'
                 when 32 then 'Monthly, relative to freq_interval'
                 when 64 then 'Runs when SQL Server Agent service starts'
                 when 128 then 'Runs when computer is idle'
                 else 'Unknown'
                 end as [freq_type_descr]
, case when c.freq_type = 1 then 'Unused'
       when c.freq_type = 4 then 'Every ' + convert(varchar(10), c.freq_interval) + ' days'
       when c.freq_type = 8 then 'Weekly: every'
                   + case when c.freq_interval & 1 = 1 then ' Sunday' else '' end 
                   + case when c.freq_interval & 2 = 2 then ' Monday' else '' end 
                   + case when c.freq_interval & 4 = 4 then ' Tuesday' else '' end 
                   + case when c.freq_interval & 8 = 8 then ' Wednesday' else '' end 
                   + case when c.freq_interval & 16 = 16 then ' Thursday' else '' end 
                   + case when c.freq_interval & 32 = 32 then ' Friday' else '' end 
                   + case when c.freq_interval & 64 = 64 then ' Saturday' else '' end 

       when c.freq_type = 16 then 'Monthly: on the ' + convert(varchar(10), c.freq_interval) + ' day of every ' + convert(varchar(10), c.freq_recurrence_factor) + ' month(s)'
       when c.freq_type = 32 then 'Monthly: on the ' + case when c.freq_relative_interval = 0 then 'Unused'
                 when c.freq_relative_interval = 1 then 'First'
                 when c.freq_relative_interval = 2 then 'Second'
                 when c.freq_relative_interval = 4 then 'Third'
                 when c.freq_relative_interval = 8 then 'Fourth'
                 when c.freq_relative_interval = 16 then 'Last'
                 else 'Unknown' end + ' ' + case when c.freq_interval = 1 then  'Sunday'
                                                                when c.freq_interval = 2 then  'Monday'
                                                                when c.freq_interval = 3 then  'Tuesday'
                                                                when c.freq_interval = 4 then  'Wednesday'
                                                                when c.freq_interval = 5 then  'Thursday'
                                                                when c.freq_interval = 6 then  'Friday'
                                                                when c.freq_interval = 7 then  'Saturday'
                                                                when c.freq_interval = 8 then  'Day'
                                                                when c.freq_interval = 9 then  'Weekday'
                                                                when c.freq_interval = 10 then 'Weekend day'
                                                                end + ' of every ' + convert(varchar(10), c.freq_recurrence_factor) + ' month(s)'
       else 'Unused'
       end as [freq_interval_descr]
, case when c.freq_type = 1 then 'At the specified time'
       when c.freq_subday_type = 1 then 'At the specified time'
       when c.freq_subday_type = 2 then 'Seconds'
       when c.freq_subday_type = 4 then 'Minutes'
       when c.freq_subday_type = 8 then 'Hours'
                 end as [freq_subday_type_descr]
, case 
when c.freq_type = 1 then 'At ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12)
when c.freq_subday_type = 1 then 'At ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12)
when c.freq_subday_type in (2,4,8) then 'Every ' + convert(varchar(10), c.freq_subday_interval) + ' ' + case c.freq_subday_type when 1 then 'At the specified time'
                 when 2 then 'Seconds' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 when 4 then 'Minutes' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 when 8 then 'Hours' + ' between ' + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_start_date, c.active_start_time), 121),12, 12) + ' and '  + substring(convert(varchar(23), msdb.dbo.agent_datetime(c.active_end_date, c.active_end_time), 121),12, 12)
                 end 
                 else 'Unused' end as [freq_subday_interval_descr]
from msdb.dbo.sysjobs as [a]
join msdb.dbo.sysjobschedules as [b] on b.job_id = a.job_id
join msdb.dbo.sysschedules as [c] on c.schedule_id = b.schedule_id and (@exclude_disabled_schedules = 0 or (@exclude_disabled_schedules = 1 and c.[enabled] = 1))
where (@exclude_disabled_jobs = 0 or (@exclude_disabled_jobs = 1 and a.[enabled] = 1))
order by a.[name] asc, c.[name] asc

SQL Server – Script out database permissions

I recently had a need to script out all of the database permissions for a particular database in SQL Server, so I threw together a script that will do this for me.

I thought I would share.  Here you go:



declare @user varchar(128);
set @user = ''; -- leave blank to get permissions for all users;

select
case a.[state]
when 'D' then 'DENY'
when 'R' then 'REVOKE'
when 'G' then 'GRANT'
when 'W' then 'GRANT'
end + ' ' +
case a.[type]
when 'AL' then 'ALTER'
when 'ALAK' then 'ALTER ANY ASYMMETRIC KEY'
when 'ALAR' then 'ALTER ANY APPLICATION ROLE'
when 'ALAS' then 'ALTER ANY ASSEMBLY'
when 'ALCF' then 'ALTER ANY CERTIFICATE'
when 'ALDS' then 'ALTER ANY DATASPACE'
when 'ALED' then 'ALTER ANY DATABASE EVENT NOTIFICATION'
when 'ALFT' then 'ALTER ANY FULLTEXT CATALOG'
when 'ALMT' then 'ALTER ANY MESSAGE TYPE'
when 'ALRL' then 'ALTER ANY ROLE'
when 'ALRT' then 'ALTER ANY ROUTE'
when 'ALSB' then 'ALTER ANY REMOTE SERVICE BINDING'
when 'ALSC' then 'ALTER ANY CONTRACT'
when 'ALSK' then 'ALTER ANY SYMMETRIC KEY'
when 'ALSM' then 'ALTER ANY SCHEMA'
when 'ALSV' then 'ALTER ANY SERVICE'
when 'ALTG' then 'ALTER ANY DATABASE DDL TRIGGER'
when 'ALUS' then 'ALTER ANY USER'
when 'AUTH' then 'AUTHENTICATE'
when 'BADB' then 'BACKUP DATABASE'
when 'BALO' then 'BACKUP LOG'
when 'CL' then 'CONTROL'
when 'CO' then 'CONNECT'
when 'CORP' then 'CONNECT REPLICATION'
when 'CP' then 'CHECKPOINT'
when 'CRAG' then 'CREATE AGGREGATE'
when 'CRAK' then 'CREATE ASYMMETRIC KEY'
when 'CRAS' then 'CREATE ASSEMBLY'
when 'CRCF' then 'CREATE CERTIFICATE'
when 'CRDB' then 'CREATE DATABASE'
when 'CRDF' then 'CREATE DEFAULT'
when 'CRED' then 'CREATE DATABASE DDL EVENT NOTIFICATION'
when 'CRFN' then 'CREATE FUNCTION'
when 'CRFT' then 'CREATE FULLTEXT CATALOG'
when 'CRMT' then 'CREATE MESSAGE TYPE'
when 'CRPR' then 'CREATE PROCEDURE'
when 'CRQU' then 'CREATE QUEUE'
when 'CRRL' then 'CREATE ROLE'
when 'CRRT' then 'CREATE ROUTE'
when 'CRRU' then 'CREATE RULE'
when 'CRSB' then 'CREATE REMOTE SERVICE BINDING'
when 'CRSC' then 'CREATE CONTRACT'
when 'CRSK' then 'CREATE SYMMETRIC KEY'
when 'CRSM' then 'CREATE SCHEMA'
when 'CRSN' then 'CREATE SYNONYM'
when 'CRSO' then 'CREATE SEQUENCE'
when 'CRSV' then 'CREATE SERVICE'
when 'CRTB' then 'CREATE TABLE'
when 'CRTY' then 'CREATE TYPE'
when 'CRVW' then 'CREATE VIEW'
when 'CRXS' then 'CREATE XML SCHEMA COLLECTION'
when 'DL' then 'DELETE'
when 'EX' then 'EXECUTE'
when 'IM' then 'IMPERSONATE'
when 'IN' then 'INSERT'
when 'RC' then 'RECEIVE'
when 'RF' then 'REFERENCES'
when 'SL' then 'SELECT'
when 'SN' then 'SEND'
when 'SPLN' then 'SHOWPLAN'
when 'SUQN' then 'SUBSCRIBE QUERY NOTIFICATIONS'
when 'TO' then 'TAKE OWNERSHIP'
when 'UP' then 'UPDATE'
when 'VW' then 'VIEW DEFINITION'
when 'VWCT' then 'VIEW CHANGE TRACKING'
when 'VWDS' then 'VIEW DATABASE STATE'
end +
CASE a.[class]
WHEN 0 then '' --'DATABASE'
WHEN 1 then ' on object::' + case when schema_name(d.[schema_id]) is null then ''
else '[' + schema_name(d.[schema_id]) + '].'
end + '[' + object_name(a.major_id) + ']' +
case when a.minor_id <> 0 then ' (' + rtrim(g.name) + ')'
else '' end -- 'OBJECT_OR_COLUMN'
WHEN 3 then ' on schema::[' + schema_name(a.[major_id]) + ']' --'SCHEMA'
WHEN 4 then ' on user::[' + rtrim(c.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'DATABASE_PRINCIPAL'
WHEN 5 then ' on assembly::[' + rtrim(h.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'ASSEMBLY'
WHEN 6 then ' on type::[' + schema_name(i.[schema_id]) + '].[' + rtrim(i.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'TYPE'
WHEN 10 then ' on xml schema collection::[' + rtrim(j.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'XML_SCHEMA_COLLECTION'
WHEN 15 then ' on message type::[' + rtrim(s.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'MESSAGE_TYPE'
WHEN 16 then ' on service contract::[' + rtrim(p.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'SERVICE_CONTRACT'
WHEN 17 then ' on service::[' + rtrim(q.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'SERVICE'
WHEN 18 then ' on remote service binding::[' + rtrim(r.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'REMOTE_SERVICE_BINDING'
WHEN 19 then ' on route::[' + rtrim(n.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'ROUTE'
WHEN 23 then ' on full text catalog::[' + rtrim(o.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'FULL_TEXT_CATALOG'
WHEN 24 then ' on symmetric key::[' + rtrim(m.name) collate SQL_Latin1_General_CP1_CI_AS + ']' -- 'SYMMETRIC_KEYS'
WHEN 25 then ' on certificate::[' + rtrim(k.name) collate SQL_Latin1_General_CP1_CI_AS + ']' --'CERTIFICATE'
WHEN 26 then ' on asymmetric key::[' + rtrim(l.name) collate SQL_Latin1_General_CP1_CI_AS + ']' -- 'ASYMMETRIC_KEY'
else ''
end +
' to [' + rtrim(b.name) collate SQL_Latin1_General_CP1_CI_AS + ']' +
case a.[state]
when 'W' then ' WITH GRANT OPTION;'
else ';'
end
from sys.database_permissions as [a]
join sys.database_principals as [b] on b.principal_id = a.grantee_principal_id and (isnull(@user, '') = '' or b.name = @user)
left outer join sys.database_principals as [c] on c.principal_id = a.major_id
left outer join sys.all_objects as [d] on d.[object_id] = a.major_id
left outer join sys.all_objects as [e] on e.[object_id] = a.minor_id
left outer join sys.symmetric_keys as [f] on f.symmetric_key_id = a.major_id
left outer join sys.columns as [g] on g.[object_id] = a.major_id and g.column_id = a.minor_id
left outer join sys.assemblies as [h] on h.assembly_id = a.major_id
left outer join sys.types as [i] on i.user_type_id = a.major_id
left outer join sys.xml_schema_collections as [j] on j.xml_collection_id = a.major_id
left outer join sys.certificates as [k] on k.certificate_id = a.major_id
left outer join sys.asymmetric_keys as [l] on l.asymmetric_key_id = a.major_id
left outer join sys.symmetric_keys as [m] on m.symmetric_key_id = a.major_id
left outer join sys.routes as [n] on n.route_id = a.major_id
left outer join sys.fulltext_catalogs as [o] on o.fulltext_catalog_id = a.major_id
left outer join sys.service_contracts as [p] on p.service_contract_id = a.major_id
left outer join sys.services as [q] on q.service_id = a.major_id
left outer join sys.remote_service_bindings as [r] on r.remote_service_binding_id = a.major_id
left outer join sys.service_message_types as [s] on s.message_type_id = a.major_id


If anyone has a better method of scripting out database permissions I would love to hear it.

Thanks!