Category Archives: Administration

Statistics IO, what does it show?

As I mentioned in another post, I was recently at SQL Saturday in Manchester. One of the presentations raised a very interesting question about what statistics io shows when you have a forwarded record. I thought it would make an interesting blog post to investigate this question and find an answer as best I can. Fair warning: this is a long post, and you can download the scripts and read the summary at the end. Most of the explanation is in the scripts, because at the moment code tags don't seem to work on this theme and a nice syntax highlighter I thought might work breaks my site (I'll get there and update previous posts though).

Let's start with the setup. First we create a view to make these scripts shorter (trust me, they're long enough). We then create a simple table with two columns and show that it has one IAM page (I will post about IAM pages in more depth in future, but Paul Randal has an excellent article on them) and one data page. You will need to run the setup (part 1) for each script; the interesting part of each script starts at part 2. The standard blurb below applies to all of these scripts: –

These scripts were written by Rainer Unwin (c) 2015

You may alter this code for your own non-commercial purposes (e.g. in a
for-sale commercial tool).
You may republish altered code as long as you include this copyright and
give due credit, however you must obtain my prior permission before blogging
this code.

This code and information are provided “as is” without warranty of
any kind, either expressed or implied, including but not limited
to the implied warranties of merchantability and/or fitness for a
particular purpose. You use the script at your own risk and should
always test in a pre-production or test environment before running
in production.

The view: –

[code language="sql"]
use tempdb;
go

if object_id( N'dbo.reads_pages', 'V' ) is not null drop view dbo.reads_pages;
go
create view dbo.reads_pages
as
select is_iam_page
, allocated_page_file_id
, allocated_page_page_id
from sys.dm_db_database_page_allocations(
2 -- database_id; 2 is always tempdb
, object_id( 'dbo.reads', 'U' )
, 0
, 1
, 'detailed'
)
where is_allocated = 1;
go
[/code]

Let's get an understanding of what stats io normally shows us.

Script 1: –

[code language="sql"]
-- :: PART 1 - setup
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data char( 5000 ) );
go

-- :: PART 2 - check with a single page

-- insert a single record and check the structure of the table
-- then confirm that we get a single read (the data page only)
insert dbo.reads( i, data ) values( 1, 'a' );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (the data page). The IAM page is not read.
-- let's try again with a second page of data.

-- :: PART 3 - now try with 2 data pages

insert dbo.reads( i, data ) values( 2, 'b' );
go

select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- again we see the number of reads of data pages only.
-- we need to prove this is the case when we exceed the
-- extent boundary as well, so let's add another 10 pages

-- :: PART 4 - let's use 12 pages, which exceeds the extent boundary, and conclude

insert into dbo.reads( i, data )
values( 3, 'c' )
, ( 4, 'd' )
, ( 5, 'e' )
, ( 6, 'f' )
, ( 7, 'g' )
, ( 8, 'h' )
, ( 9, 'i' )
, ( 10, 'j' )
, ( 11, 'k' )
, ( 12, 'l' );
go

select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- If you have multiple files, as I do, you may also
-- find that you now have multiple IAM pages. In my
-- case I had 12 data pages and 2 IAM pages. Reads
-- from the select were 12, matching the number of
-- data pages
[/code]

Script 1 summary

So what this script shows is the behaviour of statistics io with no forwarded records. As you can clearly see, it never shows that the IAM page is read, even though, logically, an allocation order scan has to read it. The takeaway is that only data pages are counted by statistics io.

Script 2 – What is shown for forwarded records where each forwarded record occupies its own page

[code language="sql"]
-- :: PART 1 - setup, use a varchar column to allow us to forward records
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data varchar( 8000 ) );
go

-- :: PART 2 - check with a single page

-- insert 7 records and check the structure of the table
-- then confirm that we get a single read (the data page only)
insert dbo.reads( i, data )
values( 1, replicate( 'a', 7677 ) ) -- fill the page
, ( 2, replicate( 'b', 50 ) )
, ( 3, replicate( 'c', 50 ) )
, ( 4, replicate( 'd', 50 ) )
, ( 5, replicate( 'e', 50 ) )
, ( 6, replicate( 'f', 50 ) )
, ( 7, replicate( 'g', 50 ) );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (the data page). The IAM page is not read.

-- :: PART 3 - now generate 1 forwarded record
update dbo.reads
set data = replicate( 'b', 5000 )
where i = 2;
go

-- 3 pages: 1 IAM, 1 data & 1 for the forwarded record
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- now we see 3 page reads. This could be because we
-- have read the IAM page to see where we should read
-- the forwarded page or it could be that we jumped
-- to the forwarded page and then back again

-- :: PART 4 - let's forward another record off the main data page
update dbo.reads
set data = replicate( 'd', 5000 )
where i = 4;
go

-- 4 pages: 1 IAM, 1 data, 2 for forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 5 reads, but how did we get there? If we read the IAM
-- page then we would read just 4 pages: the IAM, the
-- data page and the 2 forwarded pages. No, we must have
-- read the data page, followed the pointer, then gone back
-- to the data page and repeated for the other forward
-- pointer. Let's confirm with 1 more.

-- :: PART 5 - let's forward another record off the main data page
update dbo.reads
set data = replicate( 'f', 5000 )
where i = 3;
go

-- 5 pages: 1 IAM, 1 data, 3 for forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- yes, the pattern holds. Each new forwarded record adds
-- two reads. One to go to the forwarded page and one
-- to go back to the data page. In the next section you
-- will see another interesting effect
[/code]

Script 2 summary

What we have shown here is that statistics io counts a read of the main data page, and of each page holding a forwarded record, every time we visit it. So if you have 7 records, and records 2 and 4 are forwarded, then we read the pages as follows (and count 5 reads): –

1: Read the main data page for id = 1 and id = 2 (which has a forwarding pointer).
2: Follow the forwarding pointer and read that data page.
3: Read the first data page again for id = 3 and id = 4 (which has a forwarding pointer).
4: Follow the forwarding pointer and read that data page.
5: Read the first data page a third time for id 5 through 7.

Script 3 – what is shown when several records are forwarded to the same page

[code language="sql"]
-- :: PART 1 - setup, use a varchar column to allow us to forward records
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data varchar( 8000 ) );
go

-- :: PART 2 - check with a single page

-- insert 7 records and check the structure of the table
-- then confirm that we get a single read (the data page only)
insert dbo.reads( i, data )
values( 1, replicate( 'a', 7677 ) ) -- fill the page
, ( 2, replicate( 'b', 50 ) )
, ( 3, replicate( 'c', 50 ) )
, ( 4, replicate( 'd', 50 ) )
, ( 5, replicate( 'e', 50 ) )
, ( 6, replicate( 'f', 50 ) )
, ( 7, replicate( 'g', 50 ) );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (the data page). The IAM page is not read.

-- :: PART 3 - now generate a forwarded record
update dbo.reads
set data = replicate( 'b', 500 )
where i = 2;
go

-- 3 pages: 1 IAM, 1 data & 1 for the forwarded record
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 3 reads as expected.

-- :: PART 4 - let's forward another record off the page but onto the same forwarding page
update dbo.reads
set data = replicate( 'c', 500 )
where i = 4;
go

-- 3 pages: 1 IAM, 1 data & 1 for both forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 4 reads? But last time we got 5 reads. So it looks
-- like when we use the same forwarding page then we
-- get fewer reads. Let's try again

-- :: PART 5 - let's forward another record off the main data page
update dbo.reads
set data = replicate( 'f', 500 )
where i = 6;
go

-- 3 pages: 1 IAM, 1 data & 1 for all three forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- ok, that is weird: now we get 5 reads from the same
-- 3 pages as before. What is going on here? We don't
-- count the forwarded page each time when it's the
-- same page. So whereas before every forwarded
-- record had its own page, now it doesn't. We still
-- count the main data page whenever we read it, but
-- we don't count the forwarding page if it is the
-- same. Wow, so misleading!
[/code]

Script 3 summary

So what happens here is slightly different from before. We can show that if all the forwarded records are on the same page then we only count the page that contains the forwarded records once. This might not be what you would expect, and that is because it is a lie! A big fat lie, in fact. If you repeat the same test as in script 3 up to the end of part 4 and then do the below, you can show that the reads are a lie: –

1: Open a new query window (I'll call it query bail), update dbo.reads where i = 3 and don't commit.
2: Then run the read from script 3; this will block at i = 3.
3: Open another new query (I'll call it update) and set data = replicate( 'z', 500 ) for i = 4; let this commit.
4: Now roll back query bail.

You will see that we did in fact revisit the forwarded page to see the 'z's on there. A rough sketch of these steps is below.
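The sketch, assuming the script 3 setup has been run to the end of part 4 (the session names are just labels): –

[code language="sql"]
-- session "query bail": lock the row with i = 3 and hold the lock
begin transaction;
update dbo.reads set data = data where i = 3; -- do not commit yet

-- session "reader": this scan blocks when it reaches i = 3
set statistics io on;
select * from dbo.reads;

-- session "update": change the forwarded record for i = 4 and let it commit
update dbo.reads set data = replicate( 'z', 500 ) where i = 4;

-- back in session "query bail": release the blocker
rollback transaction;

-- the reader now completes and shows the 'z's for i = 4, proving it went back
-- to the forwarded page even though statistics io only counted that page once
[/code]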

So there you have it, statistics io… It’s great, it’s really helpful and yet it’s dishonest to some degree as well. At least you now know what’s going on though 🙂

Scripts: –

Stats IO reads part 0 – create view
Stats IO reads part 1
Stats IO reads part 2
Stats IO reads part 3

How much data does a differential backup need to back up?

I recently wanted to know how much of my database a differential backup would actually need to read and back up. I soon had the idea of scanning the Differential Change Map (DCM) pages in the data files. To understand what these pages track, let's have a quick review of how SQL Server stores data.

How does SQL Server store data?

SQL Server stores data for each entity in database pages: 8KB chunks of data in the data files. These pages are organised into 64KB extents of 8 contiguous pages. SQL Server also stores allocation pages that track the usage of pages/extents within the data files. One of these allocation pages is the DCM, which tracks whether any page in an extent has been modified since the last full backup. So if the 30th page in my database were a data page and it had been amended since the last full backup, the relevant DCM page would show that the 4th extent had been modified since the last full backup.

The theory

The DCM pages track changes to extents since the last full backup. This allows SQL Server to determine, very efficiently, which extents need to be backed up in the next differential backup. The first DCM page is the 7th page (zero based, so page 6) of each data file. The below command can be used to view the one in file 1: –

[code language="sql"]
DBCC PAGE( myDatabase, 1, 6, 3 ) WITH TABLERESULTS, NO_INFOMSGS;
GO
[/code]

DCM pages are then repeated every 511,232 pages in each data file. Given this information we can use the below pseudocode as a basis for developing our script (a minimal sketch of the first two steps follows the list): –

  1. Get a list of all the data files (Exclude log files) in our database
  2. Iterate over all the data files to load all the DCM pages
    1. Follow all the DCM pages in each file
    2. Read all the data in those DCM pages
    3. Save the relevant data into a temp table
  3. Using the result table, work out how many pages in each DCM interval (the section of a data file that one DCM page covers) will be backed up
  4. Sum the results
  5. Generate a summary result set
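As a minimal sketch of steps 1 and 2, the DCM page locations can be listed per data file and the DBCC PAGE commands built from them; reading and parsing the TABLERESULTS output is left to the downloadable DCM_scanner script: –

[code language="sql"]
-- one row per DCM page in every data file of the current database
DECLARE @dcm_pages TABLE( file_id INT, page_id BIGINT );

INSERT @dcm_pages( file_id, page_id )
SELECT f.file_id,
       6 + ( n.n * 511232 )               -- first DCM page is page 6, then one every 511,232 pages
FROM sys.database_files AS f
CROSS JOIN ( SELECT TOP ( 1000 ) ROW_NUMBER() OVER ( ORDER BY ( SELECT NULL ) ) - 1 AS n
             FROM sys.all_objects ) AS n  -- enough intervals for roughly 4TB per file
WHERE f.type_desc = N'ROWS'               -- data files only, exclude the log
  AND 6 + ( n.n * 511232 ) < f.size;      -- size is reported in 8KB pages

-- each of these pages would then be read with DBCC PAGE ... WITH TABLERESULTS
SELECT file_id,
       page_id,
       N'DBCC PAGE( ' + DB_NAME() + N', ' + CAST( file_id AS NVARCHAR( 10 ) )
       + N', ' + CAST( page_id AS NVARCHAR( 20 ) ) + N', 3 ) WITH TABLERESULTS, NO_INFOMSGS;' AS dcm_page_command
FROM @dcm_pages
ORDER BY file_id, page_id;
[/code]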

The results

I have a sample output in a couple of tables below as a summary of a scan against a small test database. I have previously run a very similar script successfully against a multi-terabyte database. Every DCM page covers just under 4GB of data (511,232 pages × 8KB), so even at 4TB you would have only about 1,000 DCM pages to read. There is obviously a small amount of overhead for the temp tables and to get the file list. Thereafter, for every DCM page there will be 14 page reads. So as you can see this is a pretty lightweight and highly efficient script.

Summary result set

total_pages    altered_pages    percent_changed
2816           2736             97.16

Detailed output; this shows one result line per contiguous run of extents in which at least one page has been modified

file_id    start_page    end_page    altered_pages
1          0             2736        2736

The script

In order to use this script please change the database name on line 60, then run the script. I would recommend running it against a test or dev instance initially. Lastly, in preference to running it against a production instance, I personally run it against a log-shipped copy of the database. Please let me know if this script proves useful or if you have any issues.

DCM_scanner

Index Depth for Partitioned Tables

One really useful function that Microsoft provide with SQL Server is the INDEXPROPERTY function. With it you can get the depth of an index, which is the number of pages in the index that need to be read to get from the root to the leaf. Assuming a table called dbo.test with an index pk_test you could use a query like the one below: –

[code language="sql"]
SELECT INDEXPROPERTY(OBJECT_ID(N'dbo.test', N'U'), N'pk_test', 'IndexDepth');
[/code]

The problem with this built-in function is that it doesn't work on partitioned tables, yielding a not-so-helpful NULL as a result. Now of course there is a way to get the index depth of each partition's b-tree with sys.dm_db_index_physical_stats. My issue here is that I don't want that much information, and I want to know how much load I will be putting on the server, load here being measured in terms of reads and by extension buffer pool usage. So I rolled my own script to look up the index depth of all partitions of an index. We'll get to that in a moment; first I thought I'd explain why I cared.
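For reference, the sys.dm_db_index_physical_stats route mentioned above looks roughly like this (a sketch using the dbo.test and pk_test names from the earlier example, assuming pk_test is index_id 1; note that even LIMITED mode has to scan pages above the leaf level): –

[code language="sql"]
SELECT ps.partition_number,
       ps.index_depth,   -- levels from root to leaf for that partition's b-tree
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(),                          -- current database
         OBJECT_ID(N'dbo.test', N'U'),     -- the partitioned table
         1,                                -- index_id 1 = the clustered index (pk_test here)
         NULL,                             -- all partitions
         'LIMITED') AS ps                  -- the cheapest scan mode
ORDER BY ps.partition_number;
[/code]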

I was looking at some indexes that someone else had created with the DTA (Database Engine Tuning Advisor). A very nice feature that works quite well when used properly; not so great when someone feels that a scatter-gun tactic of throwing all recommended indexes at a database would be clever. One place where the DTA falls down is that it doesn't know what you were trying to achieve with your design. As such the DTA may well recommend a great index that breaks something else for you. For example, with partitioned tables you can lose the ability to fast switch if you have non-aligned indexes. Here's an example of something similar to what I had. Let's assume you have 100 partitions, each of depth 3 (which we don't yet know, but my script can help you there). You have a primary key which is a nice surrogate integer identity, often recommended but not always advisable. The partitioning key is a datetime column. Finally, you also have a non-aligned index on the primary key alone, depth 4. The clustering key is on the identity; however, in practice you have to be in the right partition, so effectively it might as well be datetime, id instead.

Now if I wanted to check the existence of a record by looking up the ID, for example, then the optimiser would in this situation correctly conclude that the non-aligned index is ideal: 4 reads and we're done. If we used the clustering key we'd need to access each partition and check each one, because we cannot get the datetime field first (we only have the ID). Well, that's going to be 300 reads: 3 reads per partition times 100 partitions. Yep, them 4 reads sure are looking good right about now. The downside here was that this non-aligned index would prevent me from fast switching partitions. So yes, it was ideal for the data access but it was really bad for my data purge. Picture a very upset DBA about this time 🙂

I now had some work to do to sort this out, but before I did that I came across the annoyance with INDEXPROPERTY, so I made a stored proc to help. The proc can be created in any DB and can then be pointed at any other DB, so I recommend putting it in a Tools or DBA database if you choose to use it. By the way, it uses dynamic SQL. I really like dynamic SQL for its power and flexibility. In the right hands dynamic SQL is a really powerful tool, but only in the right places: I've seen some really bad performance issues in the past where dynamic SQL was the right way to go and no other approach would have been nearly as good. In this case I needed dynamic SQL to easily construct the query that gets the root pages of each partition, and also to capture the root page data with DBCC PAGE. The proc checks for the existence of the database, which should be the only part that might allow SQL injection, and bails out if the database cannot be found. There is also error handling, and anything deemed serious will be logged; otherwise you should get a result set. This script should work on SQL Server 2005+, although I cannot test it on all versions. Please let me know if you use it and find it useful or find any errors.
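The core lookup that the proc wraps in dynamic SQL is along these lines (a sketch only: sys.system_internals_allocation_units is undocumented, and the dbo.test name and index_id 1 are just the running example): –

[code language="sql"]
SELECT p.partition_number,
       -- root_page is a 6-byte page locator: bytes 1-4 = page id, bytes 5-6 = file id (little endian)
       CAST(SUBSTRING(au.root_page, 6, 1) AS INT) * 256
     + CAST(SUBSTRING(au.root_page, 5, 1) AS INT)            AS root_file_id,
       CAST(SUBSTRING(au.root_page, 4, 1) AS INT) * 16777216
     + CAST(SUBSTRING(au.root_page, 3, 1) AS INT) * 65536
     + CAST(SUBSTRING(au.root_page, 2, 1) AS INT) * 256
     + CAST(SUBSTRING(au.root_page, 1, 1) AS INT)            AS root_page_id
FROM sys.partitions AS p
JOIN sys.system_internals_allocation_units AS au
    ON au.container_id = p.hobt_id
   AND au.type = 1                                           -- IN_ROW_DATA only
WHERE p.object_id = OBJECT_ID(N'dbo.test', N'U')
  AND p.index_id = 1;                                        -- the partitioned index
[/code]

Each root page can then be read with DBCC PAGE ... WITH TABLERESULTS; the page header's m_level plus one gives the depth of that partition's b-tree (the leaf is level 0).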

Get index depth for partitioned tables

Indirect Checkpoints in SQL Server prior to 2012

Just a short post this time. I was preparing a transaction log demo for my local user group the other day. For this I needed to load a bunch of data into my test database without the log growing. Unfortunately I was loading the data so fast that the log grew several times. Now I’m a somewhat impatient kind of guy so I wanted a solution where I could load the data at maximum speed. I ran a few scripts to diagnose the issue and it turns out that SQL Server wasn’t checkpointing my database often enough.

As you are most likely aware Microsoft helpfully gave us indirect checkpoints in SQL Server 2012. These may well have solved the issue for me. I could simply have configured my database to a much shorter recovery interval and the rate of checkpoints would have increased. Unfortunately I happened to be running SQL Server 2008 R2 on this test server which meant I couldn’t use that feature. Now I could equally well have created a simple job to repeatedly run checkpoints. However I wanted a more flexible solution to the problem.

I ended up writing a very basic and potentially helpful script that checks how much of the transaction log is currently active before it runs the next checkpoint. I then added a delay so I could run it in a tight loop and checkpoint just after the log is 50% full. I was using the simple recovery model so this relates quite closely to when the last checkpoint was run and allows the log to clear after the next checkpoint. The thing that caught me out was that I wrote the script on my 2014 instance and wanted to deploy it on my 2008 R2 instance. Turns out that Microsoft added an extra column to DBCC LOGINFO so I got an error. A quick amendment later and I was ready to test my script. One major point of note is that this will only work to keep the log from growing if you have short transactions and are running in the simple recovery model. However if you just wanted to reduce the time that it takes recovery to run then simply increasing the checkpoint rate will work. No need for anything like my script for that.

The script is below (slight edit on December 7th 2014 for a DBCC command I forgot about), although with the loop removed and ready to be run in a job, where you can set the seconds between executions with the job schedule interval.
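The general shape of the idea is something like this (a sketch only, assuming the SQL Server 2012+ DBCC LOGINFO layout and a 50% threshold; the downloadable script below is the real thing): –

[code language="sql"]
-- checkpoint once more than half the VLFs in the current database's log are active
IF OBJECT_ID(N'tempdb..#loginfo') IS NOT NULL DROP TABLE #loginfo;
CREATE TABLE #loginfo
(
    RecoveryUnitId  INT,              -- column added in SQL Server 2012; remove it on 2008 R2 and earlier
    FileId          INT,
    FileSize        BIGINT,
    StartOffset     BIGINT,
    FSeqNo          INT,
    Status          INT,              -- 2 = VLF is active (not reusable), 0 = reusable
    Parity          INT,
    CreateLSN       NUMERIC(25, 0)
);

INSERT #loginfo
EXEC ('DBCC LOGINFO WITH NO_INFOMSGS;');

IF (SELECT 100.0 * SUM(CASE WHEN Status = 2 THEN 1 ELSE 0 END) / COUNT(*) FROM #loginfo) > 50.0
BEGIN
    CHECKPOINT;   -- only issue the checkpoint when the log is more than half full
END
[/code]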

ManualIndirectCheckpoint

Plan Cache Pollution

I was working on a pretty common and simple problem the other day. I'm sure you've all seen servers where there are plenty of single use plans in the plan cache. All those single use plans are just using up memory for no real gain. One major point to bear in mind with all this is that SQL Server is basically an in-memory system: it doesn't directly update the data on disk. The data is loaded into memory, updated in memory and then eventually flushed to disk. This means that anything that consumes memory is taking away resources that may, or may not, be better utilized. Let's take a closer look, albeit still an overview, at what happens with a read/write, and then solve our plan cache bloat problem.

Review of how SQL Server reads/updates data

As I already stated, SQL Server will not update data directly on disk. Instead it will load the data from disk into memory. During ramp up (after a reboot or instance restart and before target memory is reached) the I/O is at the extent level (8 contiguous pages on disk), although it can be more with read-ahead during scans. Once SQL Server has achieved steady state the I/O is at the "right" size. What does right mean? Well, if SQL Server needs an extent or more it will read that extent, or read ahead for more data during that I/O. However, if SQL Server needs a single 8KB page then the I/O operation is an 8KB page. These physical reads show up as PAGEIOLATCH_XX waits (where XX is usually EX or SH for write or read respectively) in sys.dm_os_wait_stats. You can use the below query to see all in-memory page latches: –

[code language="sql"]
SELECT *
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'pagelatch%'; /* Change to pageiolatch% to see the input/output latches, which are similar except for the I/O */
[/code]

With our data in memory (let's assume a single 8KB data page), SQL Server will now take the appropriate latch, most likely either a PAGELATCH_SH (read) or a PAGELATCH_EX (write). We'll ignore transactional locks and all other synchronisation mechanisms; they're well covered elsewhere and not what I wanted to cover, and this is in any case a simplification for this article. If we're reading from the page SQL Server can now process that operation; likewise, if writing or updating a record SQL Server has the opportunity to do so. This operation is now entirely in memory.

In the case of a write, SQL Server will also generate a log record which will be flushed/hardened to disk on commit (well, that's a good enough explanation for this post but isn't always entirely accurate). The record would also be hardened to disk on a rollback; however, a compensation log record would also be created and hardened, so we would write 2 log records in the case of a rollback. After, and only after, the log records of the changes have been flushed to disk can the data page be written out to disk. It is also important to appreciate that the write of the data page may not happen right away.

So when is a data page written to disk? There are several internal operations that can flush a data page to disk; sorry if I miss something out here, but they are basically as below (for checkpoint I have listed some operations that cause checkpoints): –

  • Checkpoint – this writes all dirty pages (Pages in memory that have data changes not yet reflected on disk) for a database to disk
    • Automatically based on the server or database recovery interval
    • Adding or removing a data or log file to/from a database
    • Any kind of backup operation
    • Any kind of clean shutdown of a database
    • Creating a database snapshot
    • Anything that causes a database snapshot to be created under the covers (eg, DBCC CHECKDB)
    • Putting a database into read only mode
    • Manual checkpoints
  • Lazywriter – this process tries to keep some free buffers (Think of these as memory slots for pages) in the buffer pool and will flush the least recently used pages to disk as needed
  • Eager writing – this process goes to work during minimally logged operations like bulk loads and flushes the pages written by those operations to disk very quickly

So you can appreciate that allocating memory to execution plans that won't be used more than once wastes memory that could be used for data, at least until SQL Server ages those plans out automatically. Also, more data pages in the buffer pool means less physical I/O and therefore shorter waits. This generally leads to better performance. I say generally because I don't want you to run off and flush your plan cache; that would be a very bad idea.

The solution

Now we can get back to the meat of this post. If you run the below query (which might be pretty intensive depending on your server spec, so test first) you'll see how many single use plans you have cached. These are execution plans that SQL Server needed to generate to execute a query but that have not been used since that one time. I have grouped the results for single use plans into plans created in the last hour and older: –

[code language="sql"]
SELECT SUM(CASE WHEN execution_count = 1 AND creation_time > DATEADD(HOUR, -1, GETDATE()) THEN 1 ELSE 0 END) AS single_use_plan_count_last_hour,
       SUM(CASE WHEN execution_count = 1 AND creation_time < DATEADD(HOUR, -1, GETDATE()) THEN 1 ELSE 0 END) AS single_use_plan_count_older,
       SUM(CASE WHEN execution_count > 1 THEN 1 ELSE 0 END) AS multi_use_plan_count
FROM sys.dm_exec_query_stats
OPTION(RECOMPILE); /* don't add more bloat to the plan cache */
[/code]

The below query will show you how much memory is being consumed by query plans, again broken down into single use plans, other plans and the total size: –

[code language="sql"]
SELECT CAST(SUM(CASE WHEN usecounts = 1 THEN size_in_bytes / 1024.0 / 1024 ELSE 0 END) AS NUMERIC(19,2)) AS single_use_plans_mb,
       CAST(SUM(CASE WHEN usecounts <> 1 THEN size_in_bytes / 1024.0 / 1024 ELSE 0 END) AS NUMERIC(19,2)) AS other_plans_mb,
       CAST(SUM(size_in_bytes / 1024.0 / 1024) AS NUMERIC(19,2)) AS total_plan_cache_mb
FROM sys.dm_exec_cached_plans
OPTION(RECOMPILE);
[/code]

There are a couple of solutions if you have a lot of single use plans in the plan cache. The first and most obvious is to set the "optimize for ad hoc workloads" flag. This can be found on the instance properties under the advanced tab in SSMS, or set by script as shown below. I consider this to be a standard setting for most installs that I do. However, sometimes this cannot be set for whatever reason and you are left with a rather large plan cache full of single use plans. In this case there are other options.
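Setting the flag by script is the usual sp_configure pattern (it only affects plans compiled after it is enabled; existing plans stay cached until they age out or are removed): –

[code language="sql"]
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;
[/code]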

You'll need to baseline your system to decide what normal levels look like for you; however, using these queries should give you some insight into your system's plan cache. If you find that you are using a lot of memory on old single use plans then you might want to flush those; the chances are that they will be aged out before they are used again. If you suspect that the plans older than a certain age won't be re-used before being aged out, then the query at the end of this post can help you remove them more aggressively. The query gets all the single use plans older than a certain age and then loops through removing them. One small point that I cannot define for you is what counts as "a lot of memory allocated to single use plans". Unfortunately that depends on your workload and the amount of memory that your system has.

In the query below you need to adjust the waitfor delay at the bottom (so that you don't cause contention on the plan cache) and the age of the plans that you want to remove. Otherwise you should be able to run it manually or from an agent job. I would personally recommend creating a procedure for it with the parameters you want, but I don't want to spoil the fun of reading and understanding the script before running it.

Just as an example, on my Microsoft Surface the script made the difference below; the instance hadn't been up for more than a few hours: –

Run       single_use_plan_count_last_hour    single_use_plan_count_older    multi_use_plan_count
Before    1                                  101                            58
After     3                                  0                              58
Run       single_use_plans_mb    other_plans_mb    total_plan_cache_mb
Before    14.89                  16.70             31.59
After     0.46                   15.69             16.15
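For anyone curious before opening the download, the core of the approach is roughly this (a sketch, not the downloadable script itself; the one hour threshold and 50ms delay are placeholders to adjust): –

[code language="sql"]
-- snapshot the handles of old single use plans, then remove them one at a time
DECLARE @plans TABLE (plan_handle VARBINARY(64) PRIMARY KEY);

INSERT @plans (plan_handle)
SELECT cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
WHERE cp.usecounts = 1
  AND cp.objtype IN (N'Adhoc', N'Prepared')
  AND EXISTS (SELECT 1
              FROM sys.dm_exec_query_stats AS qs
              WHERE qs.plan_handle = cp.plan_handle
                AND qs.creation_time < DATEADD(HOUR, -1, GETDATE())); -- age threshold to adjust

DECLARE @plan_handle VARBINARY(64);

WHILE EXISTS (SELECT 1 FROM @plans)
BEGIN
    SELECT TOP (1) @plan_handle = plan_handle FROM @plans;

    DBCC FREEPROCCACHE(@plan_handle) WITH NO_INFOMSGS;   -- remove just this one plan
    DELETE @plans WHERE plan_handle = @plan_handle;

    WAITFOR DELAY '00:00:00.050';                        -- throttle to avoid plan cache contention
END
[/code]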

Drop_ single_use_plans