Category Archives: Development

Attitude

In order to illustrate a point, I thought I would share a story loosely based on some events in my life, although the scenario and people have been changed.

Once upon a time there was a group of builders working in the middle of a blazing hot summer. The scale of the building project made it seem like no progress was being made, and everyone became increasingly disheartened. Everyone? Why no, there was one jolly fellow, named Jack, who whistled all day and always had a joke to tell. No matter how hot the day, how strong the sun beating down on him or how thirsty he became while working, his attitude never wavered. “How is this possible?” thought his co-workers, and yet no-one had bothered to ask.

One day, even hotter than all the others, Jack was whistling away and cracking jokes like always. Suddenly one colleague, David, asked Jack how he could be so merry in such conditions. All of a sudden everyone stopped to listen to Jack’s answer. Jack thought about the question for what seemed like an eternity to everyone else. Then, all of a sudden, with a burst of laughter and an innocent grin Jack just chuckled “I just decided I would be happy!”.

The fact is that we can choose the job we do, to at least some degree. We can change jobs, or stick around. We can choose how we do the job we are employed to do, as long as our employer keeps us in it. We can even choose how we respond to what we are asked to do. Beyond that we cannot choose the exact work we will do, because we are being paid to do a specific job. Projects come and go and the project work must be done. It may be boring, it may be repetitive, it may be frustrating. Yet only you choose your attitude. You have the ability to bring a good attitude or a bad one. You have the power to have a laugh at work or to be bored. You have the choice to play around a bit, have a coffee and a good time. You get to choose how you interact with other people and with your work.

What does this have to do with agile? It’s simple. Someone else may give you a process to follow or a project to work on. Only you decide how you will approach the work. If anyone says things like: –

  • I’m bored
  • I cannot think of anything to improve in a retrospective
  • It’s impossible to release more quickly
  • The project sucks
  • The software stack is terrible
  • The existing codebase looks like horse vomit covering dog poop, and smells worse
  • etc.

You still choose how you respond. If you bring a positive attitude and try to find a way, or choose to just make the most of the time with your colleagues, you can still have a good (or at least better) time at work.

So what attitude will you bring to work? How will you respond to what you see? Will you roll your eyes and make a snarky remark, or will you crack a joke and make your work life better? Most of us have resigned and moved jobs often enough to know that the next job will likely be very similar to the previous one. It’s not the job that matters, it’s the attitude you decide to bring to work. So, what will it be? Sad or happy?

Statistics IO, what does it show?

As another post mentioned, I was recently at SQL Saturday in Manchester. One of the presentations raised a very interesting question about what statistics io shows when you have a forwarded record. I thought it would make an interesting blog post to investigate this question and answer it as best I can. Fair warning: this is a long post, and you can download the scripts and read the summary at the end. Most of the explanation is in the scripts. This is because, for some reason, code tags don’t seem to work on this theme at the moment, and a nice syntax highlighter I thought might work breaks my site (I’ll get there and update previous posts though).

Let’s start with the setup. First we create a view to make these scripts shorter (trust me, they’re long enough). We then create a simple table with two columns and show that it has one IAM page (I will post about IAM pages in more depth in future, but Paul Randal has an excellent article on them) and one data page. You will need to run the setup for each script, and the experiments proper start at part 2. The below standard blurb applies to all of these scripts: –

These scripts were written by Rainer Unwin (c) 2015

You may alter this code for your own non-commercial purposes (i.e. not in a
for-sale commercial tool).
You may republish altered code as long as you include this copyright and
give due credit; however you must obtain my prior permission before blogging
this code.

This code and information are provided “as is” without warranty of
any kind, either expressed or implied, including but not limited
to the implied warranties of merchantability and/or fitness for a
particular purpose. You use the script at your own risk and should
always test in a pre-production or test environment before running
in production.

The view: –

[code language="sql"]
use tempdb;
go

if object_id( N'dbo.reads_pages', 'V' ) is not null drop view dbo.reads_pages;
go
create view dbo.reads_pages
as
select is_iam_page
, allocated_page_file_id
, allocated_page_page_id
from sys.dm_db_database_page_allocations(
2 -- database_id of tempdb
, object_id( 'dbo.reads', 'U' )
, 0
, 1
, 'detailed'
)
where is_allocated = 1;
go
[/code]

Let’s get an understanding of what stats io normally shows us.

Script 1: –

[code language="sql"]
-- :: PART 1 - setup
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data char( 5000 ) );
go

-- :: PART 2 - check with a single page

-- insert a single record and check the structure of the table
-- then confirm that we get a single read (the data page only)
insert dbo.reads( i, data ) values( 1, 'a' );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (the data page). The IAM page is not read.
-- let's try again with a second page of data.

-- :: PART 3 - now try with 2 data pages

insert dbo.reads( i, data ) values( 2, 'b' );
go

select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- again we see the number of reads of data pages only.
-- we need to prove this is the case when we exceed the
-- extent boundary as well, so let's add another 10 pages

-- :: PART 4 - use 12 pages, which exceeds the extent boundary, and conclude

insert into dbo.reads( i, data )
values( 3, 'c' )
, ( 4, 'd' )
, ( 5, 'e' )
, ( 6, 'f' )
, ( 7, 'g' )
, ( 8, 'h' )
, ( 9, 'i' )
, ( 10, 'j' )
, ( 11, 'k' )
, ( 12, 'l' );
go

select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- if you have multiple files, as I do, you may also
-- find that you now have multiple IAM pages. In my
-- case I had 12 data pages and 2 IAM pages. Reads
-- from the select were 12, matching the number of
-- data pages
[/code]

Script 1 summary

So what this script shows is the behaviour of statistics io with no forwarded records. As you can clearly see, it never shows that the IAM page is read. Logically, since we do an allocation order scan, this page should be read. The takeaway is that only data pages are shown by statistics io.

Script 2 – What is shown for forwarded records where each forwarded record occupies its own page

[code language="sql"]
-- :: PART 1 - setup, use a varchar column to allow us to forward records
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data varchar( 8000 ) );
go

-- :: PART 2 - check with a single page

-- insert 7 records and check the structure of the table
-- then confirm that we get a single read (the data page only)
insert dbo.reads( i, data )
values( 1, replicate( 'a', 7677 ) ) -- fill the page
, ( 2, replicate( 'b', 50 ) )
, ( 3, replicate( 'c', 50 ) )
, ( 4, replicate( 'd', 50 ) )
, ( 5, replicate( 'e', 50 ) )
, ( 6, replicate( 'f', 50 ) )
, ( 7, replicate( 'g', 50 ) );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (the data page). The IAM page is not read.

-- :: PART 3 - now generate 1 forwarded record
update dbo.reads
set data = replicate( 'b', 5000 )
where i = 2;
go

-- 3 pages: 1 IAM, 1 data & 1 for the forwarded record
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- now we see 3 page reads. This could be because we
-- have read the IAM page to see where we should read
-- the forwarded page, or it could be that we jumped
-- to the forwarded page and then back again

-- :: PART 4 - forward another record off the main data page
update dbo.reads
set data = replicate( 'd', 5000 )
where i = 4;
go

-- 4 pages: 1 IAM, 1 data, 2 for forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 5 reads, but how did we get there? If we had read
-- the IAM page we would have read just 4 pages: the
-- IAM, the data page and the 2 forwarded pages. So we
-- must have read the data page, followed the pointer,
-- then gone back to the data page, and repeated for
-- the other forward pointer. Let's confirm with 1 more.

-- :: PART 5 - forward another record off the main data page
update dbo.reads
set data = replicate( 'f', 5000 )
where i = 3;
go

-- 5 pages: 1 IAM, 1 data, 3 for forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- yes the pattern holds. Each new forwarded record adds
-- two reads: one to go to the forwarded page and one
-- to go back to the data page. In the next section you
-- will see another interesting effect
[/code]

Script 2 summary

What we have shown here is that we count a read of the data page, and of each page holding a forwarded record, every time we look at a record on it. So if you have 7 records, and records 2 and 4 are forwarded, then we read the page as follows (and count 5 reads): –

1: Read the main data page and id = 1 and 2 (Forwarding pointer).
2: Follow the forwarding pointer and read that data page.
3: Read the first data page again to read id = 3 and 4 (Forwarding pointer).
4: Follow the forwarding pointer and read that data page.
5: Read the first data page a third time to read id 5 through 7.
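As a cross-check on the pattern above, the number of forwarded records in the heap can also be inspected directly. This is a small sketch, not part of the original scripts, using sys.dm_db_index_physical_stats; note that its forwarded_record_count column is only populated in 'detailed' mode. It assumes the dbo.reads table from the scripts above still exists and that you run it in tempdb:

```sql
-- Inspect the dbo.reads demo heap. forwarded_record_count is
-- null unless the scan mode is 'detailed'.
select index_type_desc
, page_count
, record_count
, forwarded_record_count
from sys.dm_db_index_physical_stats(
db_id( N'tempdb' ) -- database holding the demo table
, object_id( N'dbo.reads' ) -- the demo heap
, 0 -- index_id 0 = the heap
, null -- all partitions
, 'detailed' );
```

Comparing forwarded_record_count before and after each update is a handy sanity check that the reads you see really do line up with the number of forwarding pointers.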

Script 3 – what if several forwarded records are forwarded to the same page

[code language="sql"]
-- :: PART 1 - setup, use a varchar column to allow us to forward records
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data varchar( 8000 ) );
go

-- :: PART 2 - check with a single page

-- insert 7 records and check the structure of the table
-- then confirm that we get a single read (the data page only)
insert dbo.reads( i, data )
values( 1, replicate( 'a', 7677 ) ) -- fill the page
, ( 2, replicate( 'b', 50 ) )
, ( 3, replicate( 'c', 50 ) )
, ( 4, replicate( 'd', 50 ) )
, ( 5, replicate( 'e', 50 ) )
, ( 6, replicate( 'f', 50 ) )
, ( 7, replicate( 'g', 50 ) );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (the data page). The IAM page is not read.

-- :: PART 3 - now generate a forwarded record
update dbo.reads
set data = replicate( 'b', 500 )
where i = 2;
go

-- 3 pages: 1 IAM, 1 data & 1 for the forwarded record
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 3 reads as expected.

-- :: PART 4 - forward another record off the page but onto the same forwarding page
update dbo.reads
set data = replicate( 'c', 500 )
where i = 4;
go

-- 3 pages: 1 IAM, 1 data & 1 for both forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 4 reads? But last time we got 5 reads. So it looks
-- like when we use the same forwarding page we
-- get fewer reads. Let's try again

-- :: PART 5 - forward another record off the main data page
update dbo.reads
set data = replicate( 'f', 500 )
where i = 6;
go

-- 3 pages: 1 IAM, 1 data & 1 for all three forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- ok, that is weird: now we get 5 reads from the same
-- 3 pages as before. What is going on here? We don't
-- count the forwarded page each time when it's the
-- same page. So whereas before every forwarded
-- record had its own page, now they share one. We
-- still count the main data page whenever we read it,
-- but we don't count the forwarding page again if it
-- is the same. Wow, so misleading!
[/code]

Script 3 summary

So what happens here is slightly different from before. We can show that if all the forwarded records are on the same page, then we only count the page containing the forwarded records once. This might not be what you would expect, and that is because it is a lie! A big fat lie, in fact. If you repeat the same test as in script 3 up to the end of part 4 and then do the below, you can show that the reads are a lie: –

  1. Open a new query window (I’ll call it query bail) and update dbo.reads where i = 3, but don’t commit
  2. Run the read from script 3; this will block at i = 3
  3. Open another new query (I’ll call it update) and set data = replicate( 'z', 500 ) for i = 4; let this commit
  4. Now roll back query bail

You will see that in fact we did re-visit the forwarded page to see the 'z's on there.

So there you have it, statistics io… It’s great, it’s really helpful and yet it’s dishonest to some degree as well. At least you now know what’s going on though 🙂

Scripts: –

Stats IO reads part 0 – create view
Stats IO reads part 1
Stats IO reads part 2
Stats IO reads part 3

Collations and text scan efficiency

SQL anti-patterns are an often misunderstood concept. One very basic anti-pattern is a leading wildcard character in a textual search. It can have really bad effects on a large table, and yet there are always instances of it becoming a requirement. You see this all too often, especially when reporting-style queries are run on a transactional database. Let’s say that for some reason we wanted to find all of our customers whose surnames end in “ith”, like Smith for instance. We might write a query like the one below: –

[code language="sql"]
SELECT FirstName, LastName, DOB
FROM dbo.table1
WHERE LastName LIKE '%ith';
[/code]

Even if we have an index on LastName, the best we can do is to scan the index. An index in SQL Server, much like an index in a phone directory, is of little help to us here. If you were to try to find every surname in a phone book that ended with “ith”, you’d have to read every entry in the index. OK, that’s still better than looking at every page in the phone book. However, our query requests 3 columns which may not be in the index, and the lookups to the actual data may make the index scan plus lookups slower than a scan of the entire table.
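For contrast, here is a small sketch (using the same hypothetical table and columns as the query above) of a search with a trailing wildcard instead. Because the literal prefix comes first, the predicate is sargable, and with an index on LastName the optimizer can seek to the matching range rather than scanning every entry:

```sql
-- A literal prefix with a trailing wildcard is sargable: with an
-- index on LastName the optimizer can seek to the 'Smi' range
-- instead of reading every index entry.
SELECT FirstName, LastName, DOB
FROM dbo.table1
WHERE LastName LIKE 'Smi%';
```

This is exactly why the leading wildcard is the anti-pattern: it is the position of the wildcard, not LIKE itself, that forces the scan.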

Anyway, on to something quite trivial that can speed up such a query. If the term collation in this post’s title jumped out at you as a potential performance improvement then you’re correct. Of course we could also use a full text index, but that carries overhead and all sorts of other joys. Let’s assume, just for this post, that either this isn’t a common query or you’re low on disk space, so full text indexes are not an option. I might cover those in another post at some other point. A collation is a way of ordering something, in our case the text that we are storing, and the way that SQL Server can use different collations affects the query above. Let’s look at an example. It’s quite a long script, so I’d suggest you copy and paste it and then read along after: –

[code language="sql"]
SET NOCOUNT ON;
GO

IF OBJECT_ID( N'tempdb..#results', 'U' ) IS NOT NULL DROP TABLE #results;
GO
CREATE TABLE #results( duration INT, run VARCHAR(30) );

-- Create a test DB
USE [master];
GO
IF( SELECT DATABASEPROPERTYEX(N'TestDB', 'Collation') ) IS NOT NULL DROP DATABASE TestDB;
GO

CREATE DATABASE [TestDB]
ON  PRIMARY
( NAME = N'TestDB', FILENAME = N'C:\SQLData\DataFiles\TestDB.mdf' , SIZE = 20480KB , FILEGROWTH = 20480KB )
LOG ON
( NAME = N'TestDB_log', FILENAME = N'C:\SQLData\LogFiles\TestDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 20480KB )
GO

USE [TestDB];
GO

-- Create and populate a relation
IF OBJECT_ID( N'dbo.LoadsOfData', 'U' ) IS NOT NULL DROP TABLE dbo.LoadsOfData;
GO
CREATE TABLE dbo.LoadsOfData(
id INT IDENTITY PRIMARY KEY CLUSTERED
, filler CHAR(200) NOT NULL DEFAULT 'a' COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS -- Notice the normal collation
);
GO

WITH n( n ) AS(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), vals( n ) AS (
SELECT ROW_NUMBER() OVER( ORDER BY n.n ) - 1
FROM n, n n1, n n2, n n3, n n4, n n5
)
INSERT INTO dbo.LoadsOfData( filler )
SELECT REPLICATE( CHAR( 48 + n % 10 ) + CHAR( 65 + n % 26 ) + CHAR( 97 + (n + 1) % 26 ) +
CHAR( 48 + (n + 3) % 10 ) + CHAR( 65 + (n + 5) % 26 ) + CHAR( 97 + (n + 7) % 26 )
, 30 )
FROM vals;
GO

-- Time something that will scan
DECLARE @start DATETIME2( 7 ) = SYSDATETIME(), @a INT;

SELECT @a = COUNT(*)
FROM dbo.LoadsOfData
WHERE filler LIKE '%e%';

SELECT @a = DATEDIFF( mcs, @start, SYSDATETIME() );

INSERT INTO #results( duration, run ) VALUES( @a, 'With normal collation' );
GO

-- Change the collation
ALTER TABLE dbo.LoadsOfData ALTER COLUMN filler CHAR(200) COLLATE LATIN1_GENERAL_BIN; -- Now make the column a binary collation
GO

-- Try again
DECLARE @start DATETIME2( 7 ) = SYSDATETIME(), @a INT;

SELECT @a = COUNT(*)
FROM dbo.LoadsOfData
WHERE filler LIKE '%e%';

SELECT @a = DATEDIFF( mcs, @start, SYSDATETIME() );

INSERT INTO #results( duration, run ) VALUES( @a, 'With binary collation' );
GO

-- Yes the applied selects are inefficient but they take less space 🙂
SELECT duration, run
FROM #results
UNION ALL
SELECT (select duration from #results where run like '%normal%') / (select duration from #results where run like '%binary%'), 'Improvement multiple';
[/code]

So let’s go over what we’re doing for clarity.

  1. First we create a temp table to hold the results
  2. Then we drop and recreate a test database to use, always good to clean up
  3. Create a table to use and populate some nonsensical test data
  4. Now we run our wildcard query and get the execution time. Note that we don’t return any results to the client, so this is as close to pure execution time as we can get. We get the duration in microseconds
  5. Then we change the column to use a binary collation
  6. Now we re-run our test in the same way as before and again store the results in our temp table
  7. Finally we get the durations and the improvement multiple from the results table. I don’t care about the inefficiency here 🙂

As you can see, even with this small table there is a remarkable difference in duration, and we see an improvement of about 15 times. This is better than I was expecting; I was expecting nearer 8 times, which is more usual, but the point is well made by the difference. Also, in this case we didn’t have an index on the text column, but because that column is by far the majority of the data anyway it wouldn’t make much difference.

Generating a sequential number list on the fly

I have often found a need to generate a sequential number list for a query, or at least that it would make the solution easier. This post looks at a way of achieving this. We’ll start with the premise that we need to generate a sequential number list starting from a specific number and of specific length. As a result of this assertion we need to be able to pass in these parameters.

The easiest way to solve this problem is to store a list of numbers in a table. This is simple and effective, the main drawback being that we don’t know the numbers we’re going to need up front. If we use bigints anywhere in our code base we might need to cover the entire range, which means storing bigints from min to max. That would require a lot of storage space and may not be feasible. We will therefore look at another method, the main point being that I wish to generate these numbers on the fly. This in itself presents an issue which I will mention at the end of this post.

Let’s start with something simple like the query below and build on it: –

[code language="sql"]
select row_number() over(order by a.object_id)
from sys.all_columns a;
[/code]

This generates a fairly simple plan that is also very efficient. However, unless we already have a huge number of columns, this is severely limited in the size of result set returned. We can of course use a cross join back to the same DMV to generate a larger list. This works, however it may not encompass the entire range we want (i.e. not enough numbers generated), and there are more issues if we join this DMV more than twice. Let’s take a look at what happens. If we cross join this DMV twice or more (four times here – please note that I am only using commas for brevity; this is considered bad practice and doesn’t make it into my production code) then we start to see a sort operator in the execution plan: –

[code language="sql"]
select row_number() over(order by a.object_id)
from sys.all_columns a,
sys.all_columns b,
sys.all_columns c,
sys.all_columns d,
sys.all_columns e;
[/code]

Sort operator on multi cross join

This sort operator presents a problem. A sort is a blocking operation, which means that it must complete before we can return the result set to the next operator in the plan. Looking at the properties of this operator we can see that life is too short to wait for it to complete.

Sort operator for multi cross join

We need to get rid of this operator, so let’s play a simple trick on the optimizer and make our lives a little easier. Rather than return a sorted list at this stage, let’s return all ones. The optimizer is a pretty clever cookie a lot of the time and will realise that this doesn’t need to be sorted: a 1 cannot be determined to go before or after another 1. The downside is we don’t get our sequential numbers, but that is a trivial problem to solve. So we have the below query (which I won’t run, nor do I recommend it to you, because the result set is massive).

[code language="sql"]
select 1
from sys.all_columns a,
sys.all_columns b,
sys.all_columns c,
sys.all_columns d,
sys.all_columns e;
[/code]

Now we have removed the sort operator and we have a massive number of rows to return. We’re most of the way there. Here is the new plan without the sort: –

No sort returning all ones

OK, the estimated cost is still huge and we’ve only got a massive list of ones, but let’s solve both of those issues next. The great thing we achieved by removing the sort operator is that we no longer have any blocking operators in our query plan. We can prove this by running the query. The select starts to return rows immediately; once you’ve confirmed this, cancel the query, because it takes too long to run and will most likely exceed execution memory. This works because when the query executes, the select asks the next operator (the compute scalar) for a row, the compute scalar asks the loop join, and so on and so forth until we get to the operators that are getting the rows. None of these operators block, so the select is free to start returning rows as soon as we start the query. We can easily limit the result set with a top operator, and we can get our number list back with the row_number() function. To do so we just need to wrap the current result set, returning it as a derived table. That looks like the below, and we now have a query we can execute: –

[code language="sql"]
select top (100000)
row_number() over(order by dt.i) as seq_no
from (
select 1
from sys.all_columns a,
sys.all_columns b,
sys.all_columns c,
sys.all_columns d,
sys.all_columns e
) dt (i);
[/code]

The plan that we get is also a lot nicer, however because it’s of a decent size I suggest you look it over on your own computer. Now, in this case I have used a specific top quantity, but again this is trivial to address. Let’s create a function that gives us total flexibility here; we’ll also move the top operator for another slight efficiency gain: –

[code language="sql"]
create function dbo.consecutive_numbers (
@top bigint
, @offset bigint
)
returns table
as
return
select row_number() over(order by v.i) + (@offset - 1) as number
from (
select top (@top) 1
from sys.all_columns a,
sys.all_columns b,
sys.all_columns c,
sys.all_columns d
) v (i);
[/code]

If we create this function, in say a “tools” database, we can then call it at any time to generate a numbers list. We can specify the required start point and the limit to be returned. Also, because cross joining sys.all_columns 4 times generates a huge result set, it is unlikely that we will exhaust the list of numbers we can generate. At any rate, it is far more likely that we exhaust our patience waiting for the result set to be returned. So if you don’t want to store a table to get at consecutive numbers then this might be a possible solution for you. I hope that this brief look at the optimizer proves useful to anyone looking for a solution to a similar problem. My final solution is presented below. It is almost identical, but uses two inline table valued functions for the increased flexibility, and just two cross joins because I wrote this for an OLTP environment. In case anyone is interested, I did a speed run to compare the two variants: they performed identically, and only ever got to 33% CPU on my Surface Pro 3, so this is not hugely CPU intensive.

[code language="sql"]
if object_id(N'dbo.many') is not null drop function dbo.many;
go
create function dbo.many (
@top bigint
)
returns table
as
return
select top (@top) 1 as number
from sys.all_columns a,
sys.all_columns b,
sys.all_columns c;
go

if object_id(N'dbo.cons_nos') is not null drop function dbo.cons_nos;
go
create function dbo.cons_nos (
@top bigint
, @offset bigint
)
returns table
as
return
select row_number() over(order by number) + (@offset - 1) as number
from dbo.many(@top);
go
[/code]

If you have tried running the solution presented in this post you will most likely have realised that there is a limitation: the size of the result set requested has a profound effect on performance. If you request a very large result set (say over 100,000,000 records) then you might be waiting a while for the list to return, even on a high-end modern server. In any case large result sets will always cost time, so this is no different. However, storing the required values ahead of time may prove a quicker solution, which you should test if you need all the performance you can get.
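To make the intended use concrete, here is a quick usage sketch of the final functions above (assuming dbo.many and dbo.cons_nos have been created as shown):

```sql
-- 10 consecutive numbers starting at 100, i.e. 100 through 109:
-- row_number() runs 1..10 and the (@offset - 1) shift rebases it.
select number
from dbo.cons_nos( 10, 100 )
order by number;
```

Being inline table valued functions, these can also be cross applied or joined against real tables, which is where a number list like this usually earns its keep.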

Using OPTIMIZE FOR to affect plan operators

This is a post about an interesting plan change I noticed while experimenting after attending a presentation by Adam Machanic (Blog) at SQLBits. For anyone who doesn’t know who Adam is, I suggest finding any videos of his previous PASS sessions; he is a very clever chap who deals with some unique problems, giving him amazing insights into the optimizer. Adam was at SQLBits in the UK (great conference which I highly recommend, by the way) talking about row goals. Once the session is available online I would recommend people watch it. Truly great session about helping the optimizer choose the correct plan without forcing a fragile plan using “hints”. Incidentally, this isn’t how I generate my sequential number lists at present; please see this post for my current solution.

If you’ve read my other posts you may have noticed that I don’t do as much development as DBA work. Maybe my posts don’t necessarily bear that out though. In any case I like developing T-SQL and I like to focus on efficiency where I can. There is no reason to leave performance to chance, and we should, in my opinion, attempt to guide the optimizer down the right path in how we write our queries. Now, as most of you will know, we can “hint” (better called force) certain plans. However, we can use hints in more subtle ways, and this is where I came across a nice plan change while playing with what I learnt from Adam. I have used simpler queries below for illustration. The query was based on the below, which gave me the flexibility to change any part of the record set I wanted to, as well as the expected row counts (I used SQL Server 2014): –

[code language="sql"]
declare @i int = 20
, @j int = 20
, @k int = 20;

select row_number() over(order by c.i)
from (
select top (@i)
[object_id]
from sys.all_columns
) c (i)
cross join (
select top (@j)
[object_id]
from sys.all_columns
) d (i)
cross join (
select top (@k)
[object_id]
from sys.all_columns
) e (i);
[/code]

Ordinarily I would not be using the top statements or sub-queries, because this is just a nice way to generate a number list (in this case 1 to 8,000). However, I put this in a while loop to run 1,000 times to do some performance checking on it. I didn’t run the exact same query, because I didn’t want inserts to be the main bottleneck, but this query better illustrates what I want to show. The execution time on my Surface Pro 3 was about 5.2 seconds. The plan had a nasty little sort operator in it. I wondered if I could make this disappear.

row_goals_with_sort

I changed the script to the below (after some playing around with optimize for values) and got a rather nice improvement in both the plan and the run time. The latter was down to 1 second.

[code language="sql"]
declare @i int = 20
, @j int = 20
, @k int = 20;

select row_number() over(order by c.i)
from (
select top (@i)
[object_id]
from sys.all_columns
) c (i)
cross join (
select top (@j)
[object_id]
from sys.all_columns
) d (i)
cross join (
select top (@k)
[object_id]
from sys.all_columns
) e (i)
option (optimize for (@i = 40, @j = 30, @k = 30));
[/code]

What struck me was that the first query had a 62MB memory grant and a rather expensive sort that didn’t exist when I hinted larger numbers. However, the largest number hinted had to be the first (@i) and also had to exceed the others by a certain value. I tracked this performance improvement down to a merge concatenation as opposed to straight-up concatenation, the latter being more expensive because it sorts all the records, not just the first join (which is sufficient in this query).

row_goals_with_merge_concat

More impressively, when I used larger numbers for the declared variables I still needed the hint to benefit from the plan shape I had generated, even when the hinted values were identical to the values I was passing in. Again I benefitted massively from not needing a sort-space memory grant (which, due to memory limitations on my Surface, rapidly spills to disk) and from not needing the sort at all. On larger values this was very easily noticeable. The same 5x performance improvement showed for almost any range of values in the declares. However, to make the difference show I did need to use a loop, as described before, because generating 30,000 numbers is near instant. At 27,000,000 the runtime dropped from 22 seconds to 4.5 seconds, the latter being with the optimize for hint.

I would like to finish this post with another big thank you to Adam Machanic and what I learnt at SQLBits. I will be doing some more playing in this area and there may be some more posts to follow.

T-SQL Binary Search

I’m sure you can all remember one of those interview questions about divide and conquer. Well, it turned out that I tried to find a documented solution to just such a problem in T-SQL the other day and couldn’t. Long story short, I decided to document it for myself and anyone else who finds it useful. The script is at the end if you don’t want to read the background and why it was useful.

A little background first. I was working for a company that had some rather large tables (hundreds of GBs and hundreds of millions of rows). To make matters worse, the tables were often heavily accessed. It was a data warehouse being used more like an OLTP environment. This presented a number of problems. One such problem was that if you didn’t have the right index on a table then you’d have difficulty creating it. Given the size of their already monolithic database, no-one wanted to see another index added for a few bad queries.

One of these queries included a regular request to look up a range of dates in an audit table. Unfortunately someone had made the decision not to put an index on the datetime field. Fortunately the table was clustered on an identity. This made for a nice situation which wasn’t too hard to solve. The table, as previously mentioned, was an audit table which meant that both the identity and the datetime of modification were written to with incremental values, they were surrogates of one another.

The solution was to go to Google/Bing and look up “T-SQL binary search”. Unfortunately that wasn’t much help. I didn’t need a description of the solution, I wanted a script. One I could easily modify would be pretty cool; one that worked out of the box, even better. A little visualization might be helpful.

[Figure: visualization of the table data]

So at present we can only scan the cluster, well without the solution I’m about to describe anyway. This is obviously a very costly operation. What’s even worse is that an index on datetime still wouldn’t itself lead to an ideal plan as I’ll show later on. In any case we don’t have a helpful index so let’s get creative.

First a few assumptions, some of these are for simplicity and some will become clear later: –

  • There are no gaps in the identity column (Makes the script easier, please note SQL Server makes no such guarantee)
  • As stated this works because both identity and datetime are ever increasing
  • The cluster is a depth 4 index (It was in my case)
  • An index on the datetime column would be depth 4
  • Assume we’re trying to read 100,000 records
  • Assume we have 50 rows per data page
  • The range of required records is contiguous, i.e. between two dates (as was the case for me)

A pseudo code solution is provided below: –

  1. Let T be our target datetime value
  2. Take the min and max identity values of our search range and sum them
    1. Initially these are the min and max of the table; thereafter we move whichever bound was wrong
  3. Divide the sum by 2 to get the halfway point, M
  4. Seek into the cluster at M and read its datetime (value X)
  5. Read the datetime of the next record, M + 1 (value Y)
  6. If X < T and Y >= T we have found the boundary, break out of the loop
  7. If X >= T the target lies at or before M, so make M our new max
  8. If Y < T the target lies after M, so make M + 1 our new min
  9. Go to 2 with the min or max value altered
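The loop above is just a textbook binary search over a co-monotonic id → datetime mapping. Here is a minimal sketch in Python (not T-SQL, and not the author’s actual script); the `get_datetime` callback is a hypothetical stand-in for the clustered-index seek:

```python
def find_first_id_at_or_after(target, min_id, max_id, get_datetime):
    """Find the first id whose datetime is >= target, assuming ids and
    datetimes increase together. Each probe reads the midpoint row and
    its successor, mirroring steps 4-8 of the pseudo code."""
    while min_id < max_id:
        mid = (min_id + max_id) // 2
        x = get_datetime(mid)        # step 4: datetime at the midpoint
        y = get_datetime(mid + 1)    # step 5: datetime of the next record
        if x < target <= y:          # step 6: boundary found
            return mid + 1
        if x >= target:              # step 7: boundary is at or before mid
            max_id = mid
        else:                        # step 8: y < target, boundary is after mid
            min_id = mid + 1
    return min_id

# Toy demo: id i carries "datetime" i * 10, so the mapping is co-monotonic.
toy = {i: i * 10 for i in range(1, 11)}
print(find_first_id_at_or_after(35, 1, 10, toy.__getitem__))  # -> 4
```

Run this once for the start date and once for the end date to get the identity range to scan.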

We can follow the above twice, once for the starting value and once for the ending value. This will give us a range to scan in the cluster. Let’s have a think about the I/O requirements here and compare them to other options.

Clustered index scan = 100M records at 50 records per page = 2M reads. That’s not good.

Let’s examine a hypothetical index for the datetime field: –

If we had an index on datetime (we don’t, remember) and it was a depth 4 index with 404 records per index page then we would see the below I/O: –

  1. 4 reads to find the first record
  2. 223 pages in a partial index scan
  3. 400,000 reads through the cluster (most would be in memory) as we loop join from the index into the cluster

Total = 400,227 reads.
Ok that’s a lot better but is it the best?

Let’s look at the binary search and scan.

  1. We have to do 8 reads through the cluster to get the min and max values for the identity column of the table.
  2. We then need to calculate the mid-point (no reads).
  3. From there we pull out 2 successive values at 4 reads each, so 8 reads per probe. This is repeated until the record is found. The number of probes is deterministic: with an integer it cannot be more than 32, with a bigint no more than 64. In our case 27 trips suffice, so that’s 216 reads.
  4. We then need to repeat this for the upper value (the min and max could be stored but it’s not worth it so my script doesn’t).
    Total so far (8 + 216) * 2 = 448
  5. Now we scan the cluster: 4 reads to get to the correct record and 2,000 pages in the cluster to be scanned.

Total = 2,452 WOW! (815 times better than the clustered index scan and 163 times better than the datetime index). To top that off we didn’t pollute the buffer pool anywhere near as much.

We just did a lot better than the index we might have wanted to create. Not only that, but we also avoided the overhead of storing another index on datetime and of keeping it up to date. This was just a simple example of how a potentially poor schema decision led to a much better solution through a little engineering and creativity. All those interview questions really do make sense and have their uses 🙂

For anyone who thought about it: yes, you can solve this with even fewer reads. If we had a datetime index and looked up the min and max values, we could get the cluster keys from that. In that case we would have the below situation: –

  1. Non clustered seek, 4 reads, to get the minimum ID value.
  2. Non clustered seek, 4 reads, to get the maximum ID value.
  3. Clustered seek, 4 reads, to get to the first page to start our scan.
  4. Scan along the cluster, 2,000 reads, to get all the data.

The total here is 2,012 pages. However we have the extra space for the index, the logging overhead and the writes to the index to keep it up to date. Yes it’s more useful, but in my case not worth it.
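The four totals can be reproduced with a little arithmetic. A sketch in Python, using the figures assumed earlier (100M rows, 50 rows per data page, depth-4 indexes, a 100,000-row range; the 223-page partial index scan is taken straight from the numbers above):

```python
import math

ROWS = 100_000_000       # rows in the audit table
ROWS_PER_PAGE = 50       # data rows per clustered-index page
DEPTH = 4                # B-tree depth of each index
RANGE_ROWS = 100_000     # rows in the requested date range

# Option 1: full clustered index scan.
table_scan = ROWS // ROWS_PER_PAGE                          # 2,000,000 reads

# Option 2: hypothetical datetime index: 4-read seek, 223-page partial
# index scan, then a 4-read key lookup into the cluster per row.
index_lookup = DEPTH + 223 + RANGE_ROWS * DEPTH             # 400,227 reads

# Option 3: binary search plus partial scan. Each probe costs 8 reads
# (two 4-read seeks), and halving 100M ids takes at most 27 probes.
probes = math.ceil(math.log2(ROWS))                         # 27
binary = (8 + probes * 8) * 2 + DEPTH + RANGE_ROWS // ROWS_PER_PAGE  # 2,452

# Option 4: datetime index fetches the two boundary keys directly.
manual = DEPTH + DEPTH + DEPTH + RANGE_ROWS // ROWS_PER_PAGE         # 2,012

print(table_scan // binary, index_lookup // binary)  # -> 815 163
```

The two ratios match the 815x and 163x improvements quoted above, and the 27 probes fall straight out of log2 of the row count.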

Summary

| Option | Page reads | % of table | Index overhead |
| --- | --- | --- | --- |
| Table scan | 2,000,000 | 100% | No |
| Index seek and lookup | 400,227 | 20.0% | Yes |
| Binary search and partial scan | 2,452 | 0.123% | No |
| Manual seek and partial scan | 2,012 | 0.1% | Yes |

My script for anyone who wants it.