dans.blog


The miscellaneous ramblings and thoughts of Dan G. Switzer, II

Moving data between source and destination tables using the OUTPUT clause in SQL Server 2005+

One of the features introduced in Microsoft SQL 2005 that I think goes largely unused is the "OUTPUT" clause. It certainly was a feature that went unnoticed by me for a long time, and even once I became aware of it, I didn't put it to much use. However, lately I've been refactoring a ton of old SQL into objects, and with this refactoring I've been making heavy use of the OUTPUT clause to help create audit trails. It's been a great way to know which rows were affected in your CRUD operations.

There's really a ton of useful things you can do with the OUTPUT clause, but one thing it allows you to do is to migrate data from one table to another in a single atomic operation.

Let's look at the syntax:

-- the table to move data from
delete
    dbo.SourceTable

-- get the data removed from the "source" table
output
    deleted.column1, deleted.column2, deleted.column3

-- insert the deleted row into the "destination" table
into dbo.DestinationTable

-- your where clause for the "delete" operation
where
    Column1 > 2000
and
    Column2 = 1234

Here's a fully working example that you can run in SSMS:

-- declare temp tables
declare @original table (id int identity(1,1), name varchar(20), company varchar(20), dob datetime)
declare @new table (id int, name varchar(20), company varchar(20), dob datetime)

-- insert some fake data into our "original" table
insert into @original (name, company, dob) values ('Bill Gates', 'Microsoft', '1955-10-28')
insert into @original (name, company, dob) values ('Paul Allen', 'Microsoft', '1953-01-21')
insert into @original (name, company, dob) values ('Steve Jobs', 'Apple', '1955-02-24')
insert into @original (name, company, dob) values ('Steve Wozniak', 'Apple', '1950-08-11')

-- show the results
select 'original' as tableName, * from @original
select 'new' as tableName, * from @new

/*
  here is the core SQL used to do the move
*/
delete
  @original

-- get the data removed from the @original table
output
  deleted.id, deleted.name, deleted.company, deleted.dob
-- insert the deleted row into the @new table
into @new

-- your where clause for the "delete" operation
where
  company = 'Microsoft'


-- show the new results 
select 'original' as tableName, * from @original
select 'new' as tableName, * from @new

If you run this example in SSMS, you'll see the two "Microsoft" records move from the @original table into the @new table.

NOTE:
The @original and @new tables are simple table variables (which are like temp tables.)

There really is a ton of useful things you can do with the OUTPUT clause, so I highly recommend playing around with the OUTPUT clause if you're using SQL Server 2005 or later!
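
As one more hypothetical illustration of what OUTPUT can do beyond DELETE, here's a sketch of capturing before-and-after values during an UPDATE (the table and column names here are just placeholders for this example):

-- hypothetical example: audit the before/after values of an update
declare @employee table (id int, salary money)
declare @audit table (id int, oldSalary money, newSalary money)

insert into @employee (id, salary) values (1, 50000)
insert into @employee (id, salary) values (2, 60000)

-- give everyone a 10% raise, capturing the old and new values
update @employee
set salary = salary * 1.10
-- deleted.* holds the pre-update values, inserted.* the post-update values
output inserted.id, deleted.salary, inserted.salary into @audit

select * from @audit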


Simple managing of one-to-many relationships w/audit trails using the OUTPUT clause in MSSQL 2005+

Earlier today I posted a generic SQL solution for easily managing one-to-many data in your SQL database. However, if you're using SQL Server 2005 (or above), you can use the OUTPUT clause to add a resultset to your SQL that will contain an audit trail of what data was removed and what data was added.

In the example, we had a database schema that looked like this:

[Schema diagram: the User, UserRole and Role tables]

The User table holds the information on all your Users, the UserRole table holds all the roles of each User and the Role table contains a list of valid roles that a user can hold.

If we want to update all the roles for a specific User, I showed you could use the following 2-step SQL process to update all their roles without deleting or changing existing roles that are still valid:

delete
from
  UserRole
where
  UserId = 1
-- IF WE HAVE NO ROLES TO ASSIGN, DO NOT RUN ANY CODE BELOW THIS LINE
-- this deletes only rows not in our new selection
and
  RoleId not in (1,2,3)

insert into
  UserRole
(
  UserId, RoleId
)

select
  1 as UserId
  , RoleId
from
  Role
where
  RoleId in (1,2,3)
-- only grab roles not already inserted into the database
and
  RoleId not in (
    select
      RoleId
    from
      UserRole
    where
      UserId = 1
  )

However, we can take this code one step further: using the OUTPUT clause in SQL Server 2005+, we can capture the changed data and return a resultset with our query that provides an audit trail of all the changes:

-- create a table to store results from the various SQL operations
declare @results table (Type varchar(20), UserId int, RoleId int)

delete
from
  UserRole
-- store the records deleted into the temp table
output
  'deleted' as Type, Deleted.UserId, Deleted.RoleId into @results

where
  UserId = 1
-- IF WE HAVE NO ROLES TO ASSIGN, DO NOT RUN ANY CODE BELOW THIS LINE, EXCEPT FOR THE FINAL SQL STATEMENT
-- this deletes only rows not in our new selection
and
  RoleId not in (1,2,3)

insert into
  UserRole
(
  UserId, RoleId
)

-- store the records inserted into the temp table
output
  'inserted' as Type, Inserted.UserId, Inserted.RoleId into @results

select
  1 as UserId
  , RoleId
from
  Role
where
  RoleId in (1,2,3)
-- only grab roles not already inserted into the database
and
  RoleId not in (
    select
      RoleId
    from
      UserRole
    where
      UserId = 1
  )

-- output the audit trail of the delete & insert operations
select
  *
from
  @results

With the addition of the OUTPUT clause, our query will now return a resultset with 3 columns:

  • Type – This is a string that will be either "deleted" or "inserted", based on the operation that occurred
  • UserId – The userId that was affected (in this use case it would always be the same userId)
  • RoleId – The role that was modified

Now you've got a great little SQL snippet that builds an audit trail you can use to track the changes to your UserRole table! While this is a pretty simple example, you could use this basic syntax anytime you need to manage this type of data.
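
If all you need at the end is a quick summary instead of the row-by-row trail, the same @results table from the example above can simply be aggregated:

-- summarize the audit trail by operation type
select
  Type
  , count(*) as RowsAffected
from
  @results
group by
  Type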


Easily manage one-to-many data in your SQL database without using multiple INSERT INTO statements

A task that most web developers have faced at one point or another is how to cleanly manage updating data in a one-to-many relationship in a relational database. For example, let's say that you have a schema that looks something like this:

[Schema diagram: the User, UserRole and Role tables]

The User table holds the information on all your Users, the UserRole table holds all the roles of each User and the Role table contains a list of valid roles that a user can hold. This type of schema gives us a ton of flexibility, but management of the UserRole table can often be tricky. The most common method I've seen developers use to maintain a table like this is to delete all the data in the UserRole table for a user and use a loop to do an insert on each role that needs to be assigned. This leads to SQL that looks something like:

delete
from
  UserRole
where
  UserId = 1

insert into
  UserRole
(
  UserId, RoleId
) values (
  1, 1
)

insert into
  UserRole
(
  UserId, RoleId
) values (
  1, 2
)

insert into
  UserRole
(
  UserId, RoleId
) values (
  1, 3
)

While this works, it has several issues:

  1. If there are many roles being modified, this can lead to a very lengthy SQL statement and/or many database transactions (depending on how your code is implemented.)
  2. There is no inherent constraint to ensure that an invalid role does not get inserted. If your database has foreign key relationships defined between the UserRole.RoleId and Role.RoleId columns, your database would throw an error, but wouldn't it be better just to ignore invalid values?
  3. Since you are deleting all the roles on each update, you're assigning new UserRoleIds to roles that may not have changed. This can lead to more work on your database's side for managing indexes, etc.

Wouldn't it be nicer if you could just supply a list of the RoleIds you want to update instead of doing a bunch of individual INSERT INTO statements?

Well there is actually a very nice way to manage this that should work in any modern database. The trick is to use an INSERT INTO with SELECT statement to manage your data. The basic idea is instead of doing a bunch of individual INSERT INTO statements, we can write a single SQL statement that looks like this:

insert into
  UserRole
(
  UserId, RoleId
)

select
  1 as UserId
  , RoleId
from
  Role
where
  RoleId in (1,2,3)

What's nice about this code is it works around our first two issues by reducing the SQL to a single statement and ensuring that only valid Roles will actually get inserted into the UserRole table—since we're selecting the valid rows right from the Role table.

But how do we solve issue #3, so that we don't alter roles that haven't changed? The trick is to only delete the roles not in our new list of values, then modify our insert to only insert rows not already assigned to us. So our complete code now looks like this:

delete
from
  UserRole
where
  UserId = 1
-- IF WE HAVE NO ROLES TO ASSIGN, DO NOT RUN ANY CODE BELOW THIS LINE
-- this deletes only rows not in our new selection
and
  RoleId not in (1,2,3)

insert into
  UserRole
(
  UserId, RoleId
)

select
  1 as UserId
  , RoleId
from
  Role
where
  RoleId in (1,2,3)
-- only grab roles not already inserted into the database
and
  RoleId not in (
    select
      RoleId
    from
      UserRole
    where
      UserId = 1
  )

This new code gives us 2 simple SQL statements (that can be run in a single transaction) which will now remove any roles that are no longer assigned to a user and only insert the new rows that need to be added. We're not altering data that hasn't changed.
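
To make the parenthetical concrete: wrapping the two statements in an explicit transaction (with SQL Server 2005's TRY/CATCH for safety) ensures the delete and insert succeed or fail together. This is just a sketch of the wrapper; the statements themselves are unchanged from the example above:

begin try
  begin transaction

  -- remove roles no longer assigned to the user
  delete from UserRole
  where UserId = 1 and RoleId not in (1,2,3)

  -- add only the roles not already assigned
  insert into UserRole (UserId, RoleId)
  select 1, RoleId
  from Role
  where RoleId in (1,2,3)
  and RoleId not in (select RoleId from UserRole where UserId = 1)

  commit transaction
end try
begin catch
  -- undo both statements if either fails
  rollback transaction
end catch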

Coming shortly, I'll post a follow-up article that's specific to SQL Server 2005 (and above) on how you can modify the above example to return a query that will contain an audit of all the changes applied to the database.


Performance issues with <cfqueryparam /> on first execution when using Microsoft SQL Server

The last couple of weeks have been interesting for me. I spent a bulk of the time tuning queries to optimize for performance, and I learned a number of interesting things about Microsoft SQL Server and where some of the bottlenecks in our application actually lie.

One interesting situation I ran into was that I wasn't able to replicate some of the performance issues I was seeing in our application when I'd run the same query from Query Analyzer. Now, there are a number of reasons why this could happen, but in my case I thought I was running the query in a manner that mimicked what was being executed on the server—but it turns out I was missing a key step in the process, and I'll get to that in a minute.

So, what was the problem? Let me explain.

Let's say we have the following query in ColdFusion. We'll call this "Method 1":

select
  Name, Email, Phone
from
  Employee
where
  DepartmentId = <cfqueryparam cfsqltype="cf_sql_integer" value="1" />

When you use the <cfqueryparam /> tag, ColdFusion translates the code into a bound parameter. This has the benefit of helping prevent SQL injection, but it often helps with performance as well, because it allows the backend database (like Microsoft SQL Server) to cache the execution plan. So roughly speaking, when the query above is executed on the server, it's translated to the following (which we'll call "Method 2":)

declare @p1 int
set @p1 = 1

select
  Name, Email, Phone
from
  Employee
where
  DepartmentId = @p1

You'll see this if you turn on ColdFusion debugging, and you can verify that even the name @p1 is used if you try manually adding a declare @p1 statement to your ColdFusion query.

However, when I'd run the bound version of my query in Microsoft SQL Server Management Studio, I was seeing dramatically different results. With a cleared cache, the "Method 1" query was running 7 times slower than "Method 2" when I'd run it in a query analyzer window. In the process of trying to troubleshoot the problem, I tried running the following in my ColdFusion query (which we'll call "Method 3":)

declare @p1 int
set @p1 = <cfqueryparam cfsqltype="cf_sql_integer" value="1" />

select
  Name, Email, Phone
from
  Employee
where
  DepartmentId = @p1

What's interesting about "Method 3" is that I was now seeing the exact same performance as when I ran "Method 2" from my desktop locally. This had me confused and I knew I had to be missing something; obviously, there was a key execution step missing between "Method 1" and "Method 2". What's even more confusing is that I had already been running SQL Profiler on the database server to catch the TSQL statements being executed—it's how I originally captured the query I was using in "Method 2" to run locally. The problem was I was missing a key event that I should have been logging—the RPC:Starting and RPC:Completed events.

Once I enabled the RPC:Starting event, I saw where the bottleneck was coming through.

When you use <cfqueryparam /> with SQL Server, the query isn't just translated directly into a bound statement; instead, it's first passed into the sp_prepexec system stored procedure. So, instead of actually executing "Method 2" immediately, something like the following is executed first:

declare @p1 int
set @p1=68
exec sp_prepexec @p1 output,N'@P1 int',N'select
  Name, Email, Phone
from
  Employee
where
  DepartmentId = @p1',190
select @p1

It was the call to the sp_prepexec stored procedure that was causing the huge difference in performance between "Method 1" and "Method 2". It turns out it was taking this stored procedure a lot of time to figure out the best execution plan to use before it could cache it.

To fix the problem, I just spent some time with my query (which actually ended up involving modifying some core system views) so that sp_prepexec could analyze my query and return the cached plan faster.

So, if you're having performance issues—especially on first run execution—you might want to monitor your server and see if the bottleneck isn't with the sp_prepexec trying to figure out the best plan to cache.
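
If you want to reproduce this kind of first-run behavior yourself in Management Studio, one approach is to clear the plan cache and turn on timing statistics, which reports parse/compile time separately from execution time. (Only clear the cache on a development server, since it forces every query on the instance to recompile.)

-- development servers only: wipe all cached execution plans
dbcc freeproccache

-- report parse/compile and execution times for each statement
set statistics time on

select
  Name, Email, Phone
from
  Employee
where
  DepartmentId = 1

set statistics time off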


Search for SQL in Microsoft SQL Server Management Studio with Red Gate's SQL Search

If you do any amount of SQL Server development, at some point in time you've come across a situation where you need to search your database for all occurrences of a table, column or some specific SQL statement that might be used in a view, stored procedure, trigger, etc. In the past I've ended up just dumping the db schema to a SQL file and searching the file manually, but that's pretty slow and not very elegant.

This morning though I found a better way using Red Gate's SQL Search, which is a free add-in for Microsoft SQL Server Management Studio. This add-in (which is currently free, but will eventually be integrated into the commercial SQL Prompt add-in) allows you to quickly search for strings within your database schema. What's really nice is double-clicking on a matching object takes you right to the object in the Object Explorer—which can save you a lot of time when you have a really large database (I work in an application w/several hundred tables!)

If you use SQL Server Management Studio, I highly recommend you download SQL Search and try it out.

NOTE:
If you're using the Microsoft SQL Server Management Studio Express version, or you get the error "The Add-in 'SQL Search 1.0' failed to load or caused an exception. (Error number: 80070002)" when trying to start Management Studio after installing the add-in, you'll need extensibility.dll installed before the add-in will work. Uninstall SQL Search, download and install extensibility.dll, and then re-install SQL Search.


Rounding to the nearest fraction (i.e. specific decimal place)

I'm working on some "star rating" code and I really wanted to round numbers off to the nearest fraction. While the math is really basic, my math skills are rusty and this took me longer to solve than I care to admit. So I thought I'd just blog the solution for anyone else who might find it useful:

round( value/fraction ) * fraction

Like I said, it's very basic math but is very handy when you want to round to something other than to the closest integer.
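
For example, here's what the formula looks like in T-SQL when rounding a star rating to the nearest quarter-point (the @rating and @fraction values are just sample inputs):

declare @rating float, @fraction float
set @rating = 3.87
set @fraction = 0.25

-- round(value/fraction) * fraction
-- 3.87 / 0.25 = 15.48, which rounds to 15; 15 * 0.25 = 3.75
select round(@rating / @fraction, 0) * @fraction as RoundedRating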


Using MSSQL to output time in days, hours and minutes

I was working on some code today where I wanted to output a time span in the format 1d 4h 36m (i.e. "1 day, 4 hours and 36 minutes.") I wanted a pure SQL solution so that I didn't have to worry about formatting later. Here's some example SQL that will format a time span by days, hours and minutes.

declare @startTime datetime
set @startTime = '2009-07-11 14:19:40.000'

declare @endTime datetime
set @endTime = getUtcDate()

select
    convert(varchar(40), dateDiff(mi, @startTime, @endTime)/(24*60)) + 'd '
  + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%(24*60)/60) + 'h '
  + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%60) + 'm' as Format1
  , case
      when (((dateDiff(mi, @startTime, @endTime)/(24*60))) > 0) then
          convert(varchar(40), dateDiff(mi, @startTime, @endTime)/(24*60)) + 'd '
        + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%(24*60)/60) + 'h '
        + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%60) + 'm'
      when (((dateDiff(mi, @startTime, @endTime)%(24*60)/60)) > 0) then
          convert(varchar(40), dateDiff(mi, @startTime, @endTime)%(24*60)/60) + 'h '
        + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%60) + 'm'
      else
          convert(varchar(40), dateDiff(mi, @startTime, @endTime)%60) + 'm'
    end as Format2
  , convert(varchar(40), dateDiff(mi, @startTime, @endTime)/(24*60)) + ':'
  + right('00' + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%(24*60)/60), 2) + ':'
  + right('00' + convert(varchar(40), dateDiff(mi, @startTime, @endTime)%60), 2) as Format3

NOTE:
Make sure to keep the dateDiff() time in minutes. If you switch to hours (or days) you'll have rounding issues you'll have to work around. By sticking to using minutes for the equations, you avoid the rounding issues.

When you run this query in Microsoft SQL Server, you'll see that it outputs 3 columns—Format1, Format2 and Format3.

Format1 always outputs the time span in the format 0d 0h 0m, even when the days or hours are 0. In my case, I really wanted to ignore days if no days have passed and hours if we're still under 60 minutes. This led me to create Format2.

Format2 still uses the 0d 0h 0m format, but it will drop the days if less than 24 hours have passed and drop the hours if less than 60 minutes have passed. This leaves you with strings like "6d 4h 52m", "4h 10m" or "3m". In my case, this was the best formatting for my uses.

Format3 is an alternative format that places the time span in the format d:hh:mm. While I'm not using that formatting currently, some people may find it useful. This also shows how you can force the hours and minutes to output as two-digit values (i.e. 0:02:01 = 0 days, 2 hours and 1 minute.)

Hopefully this will prove useful to some of you.


Pagination in MSSQL 2005 with one-to-many joins

I was working on restructuring some old code that needed some pagination. The query in question used a one-to-many join to pull the required information together. Imagine a search engine where you want to search over orders, but group the results by customer. The output might look something like:

Gary Dell'Abate
  Order #: 12098
  Order #: 13232
  Order #: 14551
  Order #: 16770
Fred Norris
  Order #: 11021
  Order #: 11029
Robin Quivers
  Order #: 10010
  Order #: 11001
  Order #: 12001
Howard Stern
  Order #: 13001

So, in my situation I want to also paginate by the customers. When SQL Server 2005 was introduced, it added a new feature called Common Table Expressions (CTEs.) One of the most useful features of CTEs is to paginate recordsets. Typically when paginating results, you will use the row_number() function—which creates a new unique row number for each row in your recordset:
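
Before the cut, here's the general shape of the technique: a minimal sketch where the Customer table and column names are placeholders for your own schema. The CTE numbers each customer, and the outer query grabs a single "page" of them:

;with NumberedCustomers as (
  select
    CustomerId
    , CustomerName
    , row_number() over (order by CustomerName) as RowNum
  from
    Customer
)
select
  CustomerId, CustomerName
from
  NumberedCustomers
where
  RowNum between 11 and 20 -- page 2, at 10 customers per page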

more…


Getting Microsoft SQL Server Management Studio to automatically log in to a server

I finally got tired enough of having to click the "Connect" button at SQL Server Management Studio startup, that I decided to look for a way to bypass this prompt. Since in development, I typically am only ever dealing with the same server, the connection dialog always added an extra step to the process of getting up and running.

So, a quick search on Google turned up a list of the following command prompt switches you can use to specify a server (and database) to use when starting up:

Usage:
sqlwb.exe [-S server_name[\instance_name]] [-d database] [-U user] [-P password] [-E] [file_name[, file_name]] [/?]

[-S The name of the SQL Server instance to which to connect]
[-d The name of the SQL Server database to which to connect]
[-E] Use Windows Authentication to login to SQL Server
[-U The name of the SQL Server login with which to connect]
[-P The password associated with the login]
[file_name[, file_name]] names of files to load
[-nosplash] Suppress splash screen
[/?] Displays this usage information

So, all you need to do to get SQL Server Management Studio to automatically log in to a specific server is to right-click on your SQL Server Management Studio icon, and change the "Target" to something like:

"C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\SqlWb.exe" -S SERVERNAME -d DATABASENAME

[Screenshot: SQL Server Management Studio shortcut Properties box]


Useful tips and tricks for dealing with date/time in SQL

I've come across several clever and convenient tricks for dealing with dates recently in SQL Server. I previously blogged about how to remove time from date/time stamp in SQL Server, but I've come across a group of other really useful date/time manipulations that can be very useful when dealing with SQL Server.

The basic idea behind most of these tricks is that by using a known base datetime stamp, you can then use simple addition and subtraction to calculate new dates. For example, if I calculate the difference in days between Jan 9, 2008 at 08:00 and Jan 1, 2000 at 00:00, I'll end up with a whole number of days, which I can then add back to Jan 1, 2000 at 00:00, and I'll end up with Jan 9, 2008 at 00:00.

Take the tricks below and play around to see what other useful combinations you can develop. If you have any other useful date/time tricks, let me know!
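
To make the base-date idea concrete, here's the classic application of it: stripping the time from the current date/time. dateDiff() returns the whole number of days between the base date and now, and dateAdd() adds those whole days back (in SQL Server, 0 is shorthand for the base date of Jan 1, 1900):

-- whole days since the base date, added back to the base date
select dateadd(dd, datediff(dd, 0, getdate()), 0) as TodayAtMidnight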

more…


Dynamically generating a table of dates in Microsoft SQL (Part Deux!)

Yesterday I wrote a blog entry talking about dynamically generating a table of dates in MSSQL 2005. My original solution involved common table expressions (CTE) and involved recursion. While there's nothing wrong with using recursion this way, I mainly was using recursion to simplify the UDF code into a single SQL expression.

However, the more I started thinking about the problem last night, I decided I really needed to revise things a bit further. I decided I really needed to have the query return both a start and end date for each row in the recordset—as this would make it much easier to do things such as finding all the orders placed during each hour of the day.

So, this morning I came in and re-wrote my query and optimized a few things. The first thing I did was remove the recursion, so this UDF should now work in SQL Server 2000 as well as SQL Server 2005. It's important to note that you could do the same thing with a physical table, but for my uses my date range table generally would only include at most a few dozen rows and there's not enough of a performance difference to warrant the pain of maintaining a table of every date and possible combination you may want to return aggregate results over.

The new code looks like this:

more…


Dynamically generating a table of dates in MSSQL 2005

[UPDATED: Thursday, January 08, 2009 at 11:04:25 AM]

UPDATE:
I've revised this concept, so please see the new version: Dynamically generating a table of dates in Microsoft SQL (Part Deux!)

I'm working on some code where I need to show some trend data over various periods of time. One of the issues with working with SQL and aggregate values is that it'll only show you values where there was some data, so if you want to print the number of orders on each day and some days had no orders, those days wouldn't show up in the results.

For what I was working on, I needed to ensure that even missing days would show up in my query results. This got me thinking about the best way to accomplish this task. After thinking about the problem for a while, I decided the ideal method would be if I could dynamically generate a table of dates that I could then use in a LEFT OUTER JOIN.

more…


Find the longest running queries on your MSSQL 2005 Server

[UPDATED: Wednesday, January 21, 2009 at 8:19:25 AM]

I was reading the SQL SERVER - 2008 - 2005 - Find Longest Running Query - TSQL post on Pinal Dave's blog and thought I'd run the query on my development machine to see what interesting things I'd find.

However, right off the bat I was seeing something I thought was odd—my execution times looked way too high. After some quick searching, I confirmed my initial thought that MSSQL reports time in microseconds (which is one millionth of a second or one thousandth of a millisecond.)

For my needs, reporting in milliseconds is more than enough, so I revised the query to report the time in milliseconds instead of microseconds. I also made some other modifications so that the query breaks the actual SQL statement and extra parameters into 2 columns (I derived this code from an article on MSSQLTips.com), and I added the last execution time and minimum elapsed time for each query. Finally, I changed the query to order the results by the average elapsed time—since occasionally you'll see a single query spike with a high maximum elapsed time that might be due to external circumstances (although this is good to know too.)

Here's my revised version:

select distinct top 10
    substring(t.text, (s.statement_start_offset/2)+1, (((case s.statement_end_offset when -1 then datalength(t.text) else s.statement_end_offset end) - s.statement_start_offset)/2) + 1) as SqlText
    , (case when s.statement_start_offset > 0 then substring(t.text, 0, (s.statement_start_offset/2)+1) else '' end) as SqlParameters
    , s.execution_count as ExecutionCount
    , (s.max_elapsed_time/1000) as MaxElapsedTimeMs
    , (s.min_elapsed_time/1000) as MinElapsedTimeMs
    , isnull((s.total_elapsed_time/1000) / nullif(s.execution_count, 0), 0) as AvgElapsedTimeMs
    , s.creation_time as LogCreatedOn
    , s.last_execution_time as LastExecutionTime
    , isnull(s.execution_count / nullif(datediff(s, s.creation_time, getdate()), 0), 0) as FrequencyPerSec
from
    sys.dm_exec_query_stats as s
        cross apply
    sys.dm_exec_sql_text(s.sql_handle) as t
order by
    isnull((s.total_elapsed_time/1000) / nullif(s.execution_count, 0), 0) desc

UPDATE:
I added another nullif() statement to further prevent divide by zero errors.

One thing to keep in mind is that the DBCC FREEPROCCACHE command will clear the cache and reset the log (obviously a reboot of the server or restart of the services would do the same thing.) You may have some maintenance scripts that issue the DBCC FREEPROCCACHE command, so just keep this in mind when looking at the results.

You can use the query above for lots of purposes. With a little refactoring, you could change the ORDER BY clause to return queries that had executed recently—which can be useful if you're trying to debug a slow running template.


Remove time from date/time stamp in SQL Server

I thought I'd blogged this tip before, but I saw someone else mention it on a mailing list this morning. If you ever want to remove the time portion from a date/time stamp in MSSQL Server, you can do this by removing the decimal portion of the date/time value.

SQL Server stores date/times as a numeric value where the integer portion of the value represents the number of days since epoch and the time is represented by the decimal portion of the value. This allows us to cast a datetime field as a float and then round down to the nearest integer:

select cast(floor(cast(getutcdate() as float)) as datetime)

We first convert the datetime to a float:

cast(getutcdate() as float)

We next use the floor() function to round down to the nearest integer:

more…


Division of integers in MS SQL

Yesterday I was trying to divide two values to calculate a percentage, but the value was always coming up zero. What really threw me off was that I had a query similar to the following that was working fine:

select
    floor((avg(Rating) / count(Rating)) * 100) as [Percentage]
from
    Ratings

The above query was properly returning the percentages I needed, but when I had a query like the following, all of a sudden it was just returning 0 (zero) as the result—which I knew was wrong:

select
    count(1) / (select count(1) from Sales where datediff(dd, OrderDate , getutcdate()) = 0 ) as SameDayShipPct
from
    Sales
where
     datediff(dd, OrderDate , getutcdate()) = 0
and
    datediff(dd, ShipDate , getutcdate()) = 0

The reason the above query kept returning zero, was because the SQL COUNT() function returns an INT. When MS SQL Server performs an operation on INT values, it returns an INT value. This means the division of two INT values will always return an INT.

The solution is to cast at least one of the two values to a FLOAT. The reason my first query was working fine is because the Rating column was a FLOAT column, which meant the division would return a FLOAT value. The following is the same query, but will correctly return a FLOAT.

select
    cast(count(1) as float) / (select count(1) from Sales where datediff(dd, OrderDate , getutcdate()) = 0 ) as SameDayShipPct
from
    Sales
where
    datediff(dd, OrderDate , getutcdate()) = 0
and
    datediff(dd, ShipDate , getutcdate()) = 0

Take a look at the following in Query Analyzer to see how MS SQL Server handles the division operation:

declare @r1 float, @r2 float, @r3 float
set @r1 = 3/4                -- INT / INT: integer division occurs first, so @r1 = 0
set @r2 = 3/4.0              -- INT / DECIMAL: @r2 = 0.75
set @r3 = 3/cast(4 as float) -- INT / FLOAT: @r3 = 0.75
print @r1
print @r2
print @r3

Just something to keep in mind when using division operations in SQL Server. I thought I could simply cast my expression to a FLOAT to fix the problem, but since the division operation had already converted the value to an INT, the resulting value will still be based on the original INT value.

NOTE:
Always remember to watch out for potential "Divide by zero error encountered" errors when performing division operations in SQL server. This can easily crop up, so make sure your query accounts for the possibility. An easy workaround is do something like:

ISNULL(dividend / NULLIF(expression, 0), 0)

The NULLIF(expression, 0) will make the value return null if the expression ends up being 0. This will then make the division operation return null as the result. The ISNULL() will then convert the resulting null value so that it actually returns 0 instead of null.