Tag Archives: Internals

Statistics IO, what does it show?

As mentioned in another post, I was recently at SQL Saturday in Manchester. One of the presentations raised a very interesting question about what statistics IO shows when you have a forwarded record. I thought it would make an interesting blog post to investigate this question and answer it as best I can. Fair warning: this is a long post, though you can download the scripts and just read the summaries at the end of each section. Most of the explanation is in the scripts themselves. This is because code tags don't currently work on this theme, and a syntax highlighter I thought might work breaks my site (I'll get there, and I'll update previous posts when I do).

Let's start with the setup. First we create a view to make these scripts shorter (trust me, they're long enough). We then create a simple table with two columns and show that it has one IAM page (I will post about IAM pages in more depth in future, but Paul Randal has an excellent article on them) and one data page. You will need to run the setup (part 1) for each script; the interesting work starts at part 2. The standard blurb below applies to all of these scripts: –

These scripts were written by Rainer Unwin (c) 2015

You may alter this code for your own non-commercial purposes (e.g. not in a
for-sale commercial tool).
You may republish altered code as long as you include this copyright and
give due credit; however, you must obtain my prior permission before blogging
this code.

This code and information are provided “as is” without warranty of
any kind, either expressed or implied, including but not limited
to the implied warranties of merchantability and/or fitness for a
particular purpose. You use the script at your own risk and should
always test in a pre-production or test environment before running
in production.

The view: –

[code language="sql"]
use tempdb;
go

if object_id( N'dbo.reads_pages', 'V' ) is not null drop view dbo.reads_pages;
go
create view dbo.reads_pages
as
select is_iam_page
, allocated_page_file_id
, allocated_page_page_id
from sys.dm_db_database_page_allocations(
2 -- database id of tempdb
, object_id( 'dbo.reads', 'U' )
, 0 -- index id: 0 = heap
, 1 -- partition number
, 'detailed'
)
where is_allocated = 1;
go
[/code]

Let's get an understanding of what statistics IO normally shows us.

Script 1: –

[code language="sql"]
-- :: PART 1 – setup
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data char( 5000 ) );
go

-- :: PART 2 – check with a single page

-- insert a single record and check the structure of the table
-- then confirm that we get a single read (The data page only)
insert dbo.reads( i, data ) values( 1, 'a' );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (The data page). The IAM page is not read.
-- let's try again with a second page of data.

-- :: PART 3 – Now try with 2 data pages

insert dbo.reads( i, data ) values( 2, 'b' );
go

select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- again we see the number of reads of data pages only.
-- we need to prove this is the case when we exceed the
-- extent boundary as well so let's add another 10 pages

-- :: PART 4 – let's use 12 pages, which exceeds the extent boundary, and conclude

insert into dbo.reads( i, data )
values( 3, 'c' )
, ( 4, 'd' )
, ( 5, 'e' )
, ( 6, 'f' )
, ( 7, 'g' )
, ( 8, 'h' )
, ( 9, 'i' )
, ( 10, 'j' )
, ( 11, 'k' )
, ( 12, 'l' );
go

select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- If you have multiple files, as I do, you may also
-- find that you now have multiple IAM pages. In my
-- case I had 12 data pages and 2 IAM pages. reads
-- from the select were 12, matching the number of
-- data pages
[/code]

Script 1 summary

So this script shows the behaviour of statistics IO when there are no forwarded records. As you can clearly see, it never reports the IAM page as read. Logically, since we do an allocation order scan, that page must have been read. The takeaway is that statistics IO only shows data pages.

Script 2 – What is shown for forwarded records where each forwarded record occupies its own page

[code language="sql"]
-- :: PART 1 – setup, use a varchar column to allow us to forward records
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data varchar( 8000 ) );
go

-- :: PART 2 – check with a single page

-- insert 7 records and check the structure of the table
-- then confirm that we get a single read (The data page only)
insert dbo.reads( i, data )
values( 1, replicate( 'a', 7677 ) ) -- fill the page
, ( 2, replicate( 'b', 50 ) )
, ( 3, replicate( 'c', 50 ) )
, ( 4, replicate( 'd', 50 ) )
, ( 5, replicate( 'e', 50 ) )
, ( 6, replicate( 'f', 50 ) )
, ( 7, replicate( 'g', 50 ) );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (The data page). The IAM page is not read.

-- :: PART 3 – Now generate 1 forwarded record
update dbo.reads
set data = replicate( 'b', 5000 )
where i = 2;
go

-- 3 pages: 1 IAM, 1 data & 1 for the forwarded record
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- now we see 3 page reads. This could be because we
-- have read the IAM page to see where we should read
-- the forwarded page or it could be that we jumped
-- to the forwarded page and then back again

-- :: PART 4 – let's forward another record off the main data page
update dbo.reads
set data = replicate( 'd', 5000 )
where i = 4;
go

-- 4 pages: 1 IAM, 1 data, 2 for forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 5 reads but how did we get there? If we read the IAM
-- page then we would read just 4 pages: the IAM, the
-- data page and the 2 forwarded pages. So we must have
-- read the data page, followed the pointer, then gone back
-- to the data page and repeated for the other forward
-- pointer. Let's confirm with 1 more.

-- :: PART 5 – let's forward another record off the main data page
update dbo.reads
set data = replicate( 'f', 5000 )
where i = 3;
go

-- 5 pages: 1 IAM, 1 data, 3 for forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- yes the pattern holds. Each new forwarded record adds
-- two reads: one to go to the forwarded page and one
-- to go back to the data page. In the next section you
-- will see another interesting effect
[/code]

Script 2 summary

What we have shown here is that the main data page is counted again every time the scan returns to it, and each page holding a forwarded record is counted as its pointer is followed. So if you have 7 records, and records 2 and 4 are forwarded onto their own pages, then we read as follows (and count 5 reads): –

1: Read the main data page for id = 1 and id = 2 (forwarding pointer).
2: Follow the forwarding pointer and read that data page.
3: Read the first data page again for id = 3 and id = 4 (forwarding pointer).
4: Follow the forwarding pointer and read that data page.
5: Read the first data page a third time for id 5 through 7.
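Incidentally, if you just want to count forwarded records rather than reason about reads, the physical stats DMV reports them directly. A minimal sketch, assuming the dbo.reads heap from script 2: –

[code language="sql"]
-- Count the forwarded records in the dbo.reads heap (index id 0).
-- forwarded_record_count is only populated in 'sampled' or 'detailed' mode.
select forwarded_record_count
from sys.dm_db_index_physical_stats(
db_id( N'tempdb' )
, object_id( N'dbo.reads', 'U' )
, 0 -- heap
, null -- all partitions
, 'detailed'
);
go
[/code]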

Script 3 – what is shown when several forwarded records end up on the same page

[code language="sql"]
-- :: PART 1 – setup, use a varchar column to allow us to forward records
set nocount on;
go

use tempdb;
go

-- create an object to play with
if object_id( N'dbo.reads', 'U' ) is not null drop table dbo.reads;
go

create table dbo.reads( i int, data varchar( 8000 ) );
go

-- :: PART 2 – check with a single page

-- insert 7 records and check the structure of the table
-- then confirm that we get a single read (The data page only)
insert dbo.reads( i, data )
values( 1, replicate( 'a', 7677 ) ) -- fill the page
, ( 2, replicate( 'b', 50 ) )
, ( 3, replicate( 'c', 50 ) )
, ( 4, replicate( 'd', 50 ) )
, ( 5, replicate( 'e', 50 ) )
, ( 6, replicate( 'f', 50 ) )
, ( 7, replicate( 'g', 50 ) );
go

select *
from dbo.reads_pages;
go

set statistics io on;
go

select *
from dbo.reads;
go
-- just 1 read (The data page). The IAM page is not read.

-- :: PART 3 – Now generate a forwarded record
update dbo.reads
set data = replicate( 'b', 500 )
where i = 2;
go

-- 3 pages: 1 IAM, 1 data & 1 for the forwarded record
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 3 reads as expected.

-- :: PART 4 – let's forward another record off the page but onto the same forwarding page
update dbo.reads
set data = replicate( 'c', 500 )
where i = 4;
go

-- 3 pages: 1 IAM, 1 data & 1 for both forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- 4 reads? But last time we got 5 reads. So it looks
-- like when we use the same forwarding page then we
-- get fewer reads. Let's try again

-- :: PART 5 – let's forward another record off the main data page
update dbo.reads
set data = replicate( 'f', 500 )
where i = 6;
go

-- 3 pages: 1 IAM, 1 data & 1 for all three forwarded records
select *
from dbo.reads_pages;
go

select *
from dbo.reads;
go
-- ok that is weird: now we get 5 reads from the same
-- 3 pages as before. What is going on here? We don't
-- count the forwarded page each time when it's the
-- same page. So whereas before every forwarded
-- record had its own page, now they share one. We still
-- count the main data page whenever we read it, but
-- we don't count the forwarding page if it is the
-- same. Wow, so misleading!
[/code]

Script 3 summary

So what happens here is slightly different from before. We can show that if all the forwarded records are on the same page then we only count the page that contains them once. This might not be what you would expect, and that is because it is a lie! It is a big fat lie in fact. If you repeat the same test as in script 3 up to the end of part 4 and then follow the steps below, you can show that the reads are a lie: –

  1. Open a new query window (I'll call it "query bail") and update dbo.reads where i = 3, but don't commit
  2. Then run the read from script 3; this will block at i = 3
  3. Open another new query window (I'll call it "update") and set data = replicate( 'z', 500 ) for i = 4; let this commit
  4. Now rollback query bail

You will see that we did in fact re-visit the forwarded page to see the 'z's on there.
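For reference, here is that repro as a rough sketch; each commented block goes in its own session, and it assumes the script 3 state at the end of part 4 (the exact update in session 1 is illustrative): –

[code language="sql"]
-- Session 1 ("query bail"): lock the row with i = 3 and hold the transaction open.
begin transaction;
update dbo.reads set data = replicate( 'c', 50 ) where i = 3;
-- do not commit yet

-- Session 2: the read from script 3; it blocks when the scan reaches i = 3.
select * from dbo.reads;

-- Session 3 ("update"): while session 2 is blocked, change the forwarded row.
update dbo.reads set data = replicate( 'z', 500 ) where i = 4;

-- Session 1: release the lock.
rollback;
-- Session 2 now completes and returns the 'z's for i = 4, proving the
-- forwarded page was visited again even though it was only counted once.
[/code]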

So there you have it: statistics IO. It's great, it's really helpful, and yet it's dishonest to some degree as well. At least you now know what's going on 🙂

Scripts: –

Stats IO reads part 0 – create view
Stats IO reads part 1
Stats IO reads part 2
Stats IO reads part 3

Interesting issue with heaps and page free space

First of all I would like to thank Uwe Ricken who alerted me to this point at the recent SQL Saturday in Manchester. I would also like to thank the organisers and sponsors of SQL Saturday, especially our local user group leader Chris Testa-O’Neill.

I recently blogged, to a small extent, about allocation pages. One of the allocation pages I blogged about was the Page Free Space (or PFS for short) page. Despite its name, this page tracks more than just the free space on a page; in fact, free space is only tracked for heap pages. It also tracks whether or not a page is an IAM page (also briefly mentioned in my blog post about allocation pages) and a few other things. Please see this blog post by Paul Randal of SQL Skills for a fuller description of PFS pages.

Now the main point I want to call out is that the PFS page isn't very granular in its ability to track page fullness. It tracks which of the following fullness bands a page falls into: –

  • empty
  • 1 to 50% full
  • 51 to 80% full
  • 81 to 95% full
  • 96 to 100% full

It might be clear from these values that large, wide records in a heap could cause problems. If not, please allow me to demonstrate with a small demo followed by an explanation. Try the below script in SSMS: –

[code language="sql"]
-- Use tempdb for simplicity
USE [tempdb];
GO

-- Drop the test object we intend to use and then create a test table to use
IF OBJECT_ID( N'dbo.demo', 'U' ) IS NOT NULL DROP TABLE dbo.demo;
GO

CREATE TABLE dbo.demo(
data CHAR( 2000 )
);
GO

-- We insert 4 values in one transaction
INSERT INTO dbo.demo( data )
VALUES( 'a' )
, ( 'b' )
, ( 'c' )
, ( 'd' );
GO

-- Let's check the number of data pages (Should be 1 as these 4 rows fit)
SELECT allocated_page_file_id
, allocated_page_page_id
FROM sys.dm_db_database_page_allocations(
DB_ID( N'tempdb' )
, OBJECT_ID( N'dbo.demo', 'U' )
, 0
, 1
, 'DETAILED'
)
WHERE is_allocated = 1
AND page_type = 1; -- data page
GO

-- Now let's try that again with 4 separate transactions
TRUNCATE TABLE dbo.demo;
GO

INSERT INTO dbo.demo( data ) VALUES( 'a' );
INSERT INTO dbo.demo( data ) VALUES( 'b' );
INSERT INTO dbo.demo( data ) VALUES( 'c' );
INSERT INTO dbo.demo( data ) VALUES( 'd' );
GO

-- Well isn't that odd: we now have 2 data pages
SELECT allocated_page_file_id
, allocated_page_page_id
FROM sys.dm_db_database_page_allocations(
DB_ID( N'tempdb' )
, OBJECT_ID( N'dbo.demo', 'U' )
, 0
, 1
, 'DETAILED'
)
WHERE is_allocated = 1
AND page_type = 1; -- data page
GO
[/code]

Explanation of the issue: –

In a heap SQL Server uses the PFS page to track the amount of free space on each page; it does not read each individual page in the heap (which would make large heaps unusable). When you add records in a single transaction (like the first insert in the script), SQL Server adds records to the current page until they no longer fit, then allocates another page and keeps inserting there. With individual transactions (like the second set of inserts), however, the PFS page is consulted each time we attempt to insert a record. When we add all 4 records in a single transaction they fit on one page, so SQL Server fills the page (well, near enough – there are actually 60 bytes left on the page with the records that I created). But because the second part of the demo checks the PFS page at the start of each insert, the inserts go something like this: –

Insert 1: No data pages allocated, so allocate a data page and insert 1 record.
Insert 2: Check the PFS page; our page is 25% full but shows as 50% full (due to the lack of granularity in how free space is tracked), so we insert the record on the same page.
Insert 3: Check the PFS page; our page is about 50% full and shows as 50% full, so we insert the record on the same page.
Insert 4: Check the PFS page; our page is about 75% full and shows as 80% full. The PFS page therefore suggests there is not enough space for a row that needs 25% of the page, so we allocate a new page and insert the record there.

The consequence of this optimisation is that many heap pages in tables with wide rows are likely to have quite a bit of free space. Please note that this is not the case with a clustered index (or a non-clustered index, but I'm just looking at table data to draw your attention to this), because in an index the amount of free space on a page is obtained from the actual data pages and not the PFS page.
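If you want to see those PFS buckets for yourself, DBCC PAGE can dump the PFS page. A hedged sketch against tempdb (the PFS page is page 1 of each data file; match the page ids from the earlier allocation query in the output): –

[code language="sql"]
-- Dump the first PFS page of tempdb's first file. The detailed (3) dump
-- includes one "PFS: Page Alloc Status" row per page range, with values
-- such as ALLOCATED  50_PCT_FULL or ALLOCATED  80_PCT_FULL for the demo pages.
DBCC PAGE( tempdb, 1, 1, 3 ) WITH TABLERESULTS;
GO
[/code]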

To wrap up, I'd like to point out that this issue can be resolved with the below command. Please be aware that, much like rebuilding a clustered index, this will force a rebuild of all non-clustered indexes on the table. The command has worked since at least SQL Server 2008 R2: –

[code language="sql"]
ALTER TABLE dbo.demo REBUILD;
GO
[/code]
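After the rebuild, re-running the allocation query from the demo should show the four rows compacted back onto a single data page; for example: –

[code language="sql"]
-- Should now report 1 data page for the 4 demo rows.
SELECT COUNT( * ) AS data_pages
FROM sys.dm_db_database_page_allocations(
DB_ID( N'tempdb' )
, OBJECT_ID( N'dbo.demo', 'U' )
, 0
, 1
, 'DETAILED'
)
WHERE is_allocated = 1
AND page_type = 1; -- data page
GO
[/code]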

Allocation Pages Overview

It can often help to understand how a database actually stores data. The way in which SQL Server stores data and the allocation pages it uses are well documented elsewhere, but this is my version for my web site. To gain a deeper understanding I would recommend you visit Paul Randal's blog; the sections relevant to this post can be found here, near the bottom. As Paul worked on the storage engine team at Microsoft, and managed the team for a period as well, his knowledge is beyond question.

Types of allocation page in SQL Server

SQL Server has two different types of allocation page, bytemap and bitmap. This simply refers to how the page represents the storage it tracks. Bitmap pages represent 64KB extents (a collection of 8 * 8KB pages) with a single bit each. There are 5 types of bitmap page, as below: –

  • Global Allocation Map (GAM)
  • Shared Global Allocation Map (SGAM)
  • Bulk Changed Map (BCM) – Also known as an ML page or Minimally Logged change map page
  • Differential Change Map (DCM)
  • Index Allocation Map (IAM) – these are special cases and not covered further in this post

There is also a bytemap page, which represents each individual 8KB page with a single byte. This page type is as below: –

  • Page Free Space (PFS) page

Bytemap pages

The PFS page tracks the below information about database pages: –

  • Amount of free space on a heap page
  • Presence of ghost records on index pages (Including clustered index leaf pages)
  • Is this an IAM page
  • Is this a mixed page
  • Is this page allocated

This information is represented by an entire byte per page and each PFS page can track 8,088 database pages. This equates to just under 64MB of data pages. Therefore every 8,088 pages (64MB) in the data files of your database you will find a PFS page. You can confirm this with the below commands: –

[code language="sql"]
/* Alter the database name to be a database where you have more than 64MB of data in the primary file group */
DBCC PAGE (DatabaseName, 1, 1, 0) WITH TABLERESULTS;
DBCC PAGE (DatabaseName, 1, 8088, 0) WITH TABLERESULTS;
GO
[/code]

On line 14 you should see a row for m_type with a value of 11, which indicates a PFS page.

Bitmap pages (except IAMs)

All the bitmap pages track data at the extent level. An extent is a collection of 8 * 8KB pages, so 64KB of data. Extents come in two types: –

  • Mixed extents, which have data or index pages from more than one entity
  • Uniform extents, which have pages relating to a single entity

I use the term entity to mean, in simple terms, an index or heap of a table. In truth it goes down to the partition and allocation unit type (in-row data, row-overflow data, LOB data), but we'll skip that for this explanation. Since we can track 64KB with a single bit, we can track just under 4GB of data with a single bitmap page, and these pages recur every 511,232 pages. This can be confirmed with the below statements if you have any data files greater than 4GB; I'll write the statements as though the data is in data file 1: –

[code language="sql"]
/* Alter the database name to be the database where you have enough data; we'll look at 2 GAM pages */
DBCC PAGE (DatabaseName, 1, 2, 0) WITH TABLERESULTS;
DBCC PAGE (DatabaseName, 1, 511232, 0) WITH TABLERESULTS;
GO
[/code]

On line 14 you will be able to confirm that the page type (m_type) is the same for these pages. Pages of interest are located at the below pages and then every 511,232 pages thereafter (I have included the page types so you know what to look for on line 14 of a DBCC PAGE print): –

  • Page 2 – GAM – type 8
  • Page 3 – SGAM – type 9
  • Page 6 – DCM – type 16
  • Page 7 – BCM/ML map – type 17
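As a quick sketch, you can dump all four in one go and compare the m_type values (again substituting your own database name): –

[code language="sql"]
DBCC PAGE (DatabaseName, 1, 2, 0) WITH TABLERESULTS; -- GAM, m_type 8
DBCC PAGE (DatabaseName, 1, 3, 0) WITH TABLERESULTS; -- SGAM, m_type 9
DBCC PAGE (DatabaseName, 1, 6, 0) WITH TABLERESULTS; -- DCM, m_type 16
DBCC PAGE (DatabaseName, 1, 7, 0) WITH TABLERESULTS; -- BCM/ML map, m_type 17
GO
[/code]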

This has been an overview of the types of allocation pages that SQL Server uses. There will be other future posts to go into more depth on each of these topics. However for now I will keep this short because a full post on allocation units and pages would be far too big 🙂

If you find the way that the storage engine tracks pages and objects interesting I fully recommend the posts of Paul Randal as previously mentioned or feel free to wait for future posts in this area.

Collations and text scan efficiency

SQL anti-patterns are an often misunderstood concept. One very basic anti-pattern is a leading wildcard character in a textual search. It can have really bad effects on a large table, and yet there are always instances of it becoming a requirement. You see this all too often, especially when reporting-style queries are run on a transactional database. Let's say that for some reason we wanted to find all of our customers whose surnames end in "ith", like Smith for instance. We might write a query like the one below: –

[code language="sql"]
SELECT FirstName, LastName, DOB
FROM dbo.table1
WHERE LastName LIKE '%ith';
[/code]

Even if we have an index on LastName the best we can do is scan the index. An index in SQL Server, much like an index in a phone directory, is of little help to us here. If you were to try to find every surname in a phone book that ended in "ith" you'd have to read every entry in the index. OK, that's still better than looking at every page in the phone book, but our query requests 3 columns which may not be in the index, and the lookups to the actual data may make the index scan plus lookups slower than a scan of the entire table.
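For contrast, a trailing wildcard can seek. A hypothetical sketch (dbo.table1 and the index name are made up for illustration): –

[code language="sql"]
-- With an index on LastName, a trailing wildcard becomes a range seek,
-- whereas the leading wildcard above still has to look at every index row.
CREATE INDEX ix_table1_LastName ON dbo.table1( LastName );
GO

SELECT FirstName, LastName, DOB
FROM dbo.table1
WHERE LastName LIKE 'Smi%'; -- seek on the range 'Smi' <= LastName < 'Smj'
GO
[/code]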

Anyway, on to something quite trivial that can speed up such a query. If the term collation in this post's title jumped out at you as a potential performance improvement then you're correct. Of course we could also use a full-text index, but that carries overhead and all sorts of other joys; let's assume for this post that either this isn't a common query or you're low on disk space, so full-text indexes are not an option. I might cover those at some other point. A collation is a way of ordering something, in our case the text that we are storing, and the collation a column uses affects how SQL Server can process the query above. Let's look at an example. It's quite a long script, so I'd suggest you copy and paste it and then read along after: –

[code language="sql"]
SET NOCOUNT ON;
GO

IF OBJECT_ID( N'tempdb..#results', 'U' ) IS NOT NULL DROP TABLE #results;
GO
CREATE TABLE #results( duration INT, run VARCHAR(30) );

-- Create a test DB
USE [master];
GO
IF( SELECT DATABASEPROPERTYEX(N'TestDB', 'Collation') ) IS NOT NULL DROP DATABASE TestDB;
GO

CREATE DATABASE [TestDB]
ON  PRIMARY
( NAME = N'TestDB', FILENAME = N'C:\SQLData\DataFiles\TestDB.mdf' , SIZE = 20480KB , FILEGROWTH = 20480KB )
LOG ON
( NAME = N'TestDB_log', FILENAME = N'C:\SQLData\LogFiles\TestDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 20480KB )
GO

USE [TestDB];
GO

-- Create and populate a relation
IF OBJECT_ID( N'dbo.LoadsOfData', 'U' ) IS NOT NULL DROP TABLE dbo.LoadsOfData;
GO
CREATE TABLE dbo.LoadsOfData(
id INT IDENTITY PRIMARY KEY CLUSTERED
, filler CHAR(200) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS NOT NULL DEFAULT 'a' -- Notice the normal collation
);
GO

WITH n( n ) AS(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), vals( n ) AS (
SELECT ROW_NUMBER() OVER( ORDER BY n.n ) - 1
FROM n, n n1, n n2, n n3, n n4, n n5
)
INSERT INTO dbo.LoadsOfData( filler )
SELECT REPLICATE( CHAR( 48 + n % 10 ) + CHAR( 65 + n % 26 ) + CHAR( 97 + (n + 1) % 26 ) +
CHAR( 48 + (n + 3) % 10 ) + CHAR( 65 + (n + 5) % 26 ) + CHAR( 97 + (n + 7) % 26 )
, 30 )
FROM vals;
GO

-- Time something that will scan
DECLARE @start DATETIME2( 7 ) = SYSDATETIME(), @a INT;

SELECT @a = COUNT(*)
FROM dbo.LoadsOfData
WHERE filler LIKE '%e%';

SELECT @a = DATEDIFF( mcs, @start, SYSDATETIME() );

INSERT INTO #results( duration, run ) VALUES( @a, 'With normal collation' );
GO

-- Change the collation
ALTER TABLE dbo.LoadsOfData ALTER COLUMN filler CHAR(200) COLLATE LATIN1_GENERAL_BIN NOT NULL; -- Now make the column a binary collation
GO

-- Try again
DECLARE @start DATETIME2( 7 ) = SYSDATETIME(), @a INT;

SELECT @a = COUNT(*)
FROM dbo.LoadsOfData
WHERE filler LIKE '%e%';

SELECT @a = DATEDIFF( mcs, @start, SYSDATETIME() );

INSERT INTO #results( duration, run ) VALUES( @a, 'With binary collation' );
GO

-- Yes the applied selects are inefficient but they take less space 🙂
SELECT duration, run
FROM #results
UNION ALL
SELECT (select duration from #results where run like '%normal%') / (select duration from #results where run like '%binary%'), 'Improvement multiple';
[/code]

So let's go over what we're doing, for clarity.

  1. First we drop and then create a test database to use, always good to clean up
  2. We create a temp table to hold the results
  3. Create a table to use and populate some nonsensical test data
  4. Now we run our wildcard query and get the execution time. Note that we don't return any results to the client, so what we time is essentially just the execution. We get the duration in microseconds
  5. Then we change the column to use a binary collation
  6. Now we re-run our test in the same way as before and again store the results in our temp table
  7. Finally we get the durations and a difference from the results table. I don’t care about the inefficiency here 🙂

As you can see, even with this small table there is a remarkable difference in duration: we see an improvement of about 15 times. This is better than I was expecting; nearer 8 times is more usual, but the point is well made either way. Also, in this case we didn't have an index on the text column, but because that column is by far the majority of the data anyway it wouldn't have made much difference.

How much data does a differential backup need to back up

I recently wanted to know how much of my database a differential backup would actually need to read and back up. I soon had an idea to scan the Differential Change Map (DCM) pages in the data files. To understand what these pages track, let's have a quick review of how SQL Server stores data.

How does SQL Server store data?

SQL Server stores data for each entity in database pages; pages are 8KB chunks of data in the data files. These pages are organised into 64KB extents, or 8 contiguous pages. SQL Server also stores allocation pages that track the usage of pages/extents within the data files. One of these allocation pages is the DCM, which tracks whether or not any part of any page in an extent has been modified since the last full backup. So if the 30th page in my database was amended since the last full backup, the relevant DCM page would show that the 4th extent had been modified.
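As a quick worked example of that mapping, the zero-based extent id is just the page id divided by 8 (integer division), so page 30 sits in extent 3, the 4th extent in the file. The page ids below are arbitrary examples: –

[code language="sql"]
-- page_id / 8 gives the zero-based extent id
SELECT page_id
, page_id / 8 AS extent_id
FROM ( VALUES ( 30 ), ( 6 ), ( 511238 ) ) AS p( page_id );
GO
[/code]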

The theory

The DCM pages track changes to extents since the last full backup. This allows SQL Server to determine, very efficiently, which extents need to be backed up in the next differential backup. The first DCM page is the 7th page (zero based so page 6) in the database. The below command can be used to view the page: –

[code language="sql"]
DBCC PAGE( myDatabase, 1, 6, 3 ) WITH TABLERESULTS, NO_INFOMSGS;
GO
[/code]

Also, DCM pages repeat every 511,232 pages in each data file. Given this information we can use the below pseudo code as a basis for developing our script (a sketch of the first two steps follows the list): –

  1. Get a list of all the data files (Exclude log files) in our database
  2. Iterate over all the data files to load all the DCM pages
    1. Follow all the DCM pages in each file
    2. Read all the data in those DCM pages
    3. Save the relevant data into a temp table
  3. Using the result table work out how many pages in each DCM interval, the section of a data file that the DCM relates to, will be backed up
  4. Sum the results
  5. Generate a summary result set
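As a minimal sketch of steps 1 and 2, the below lists every DCM page id per data file of the current database, assuming sys.all_objects has enough rows to cover your largest file (the full scanner linked below also reads and parses each page with DBCC PAGE): –

[code language="sql"]
-- Page 6 is the first DCM page in each file, then every 511,232 pages
-- thereafter, capped at the file size (size is reported in 8KB pages).
SELECT f.file_id
, n.page_id
FROM sys.database_files AS f
CROSS APPLY (
SELECT ( ROW_NUMBER() OVER( ORDER BY ( SELECT NULL ) ) - 1 ) * 511232 + 6 AS page_id
FROM sys.all_objects
) AS n
WHERE f.type_desc = 'ROWS' -- data files only, exclude log files
AND n.page_id < f.size;
GO
[/code]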

The results

I have a sample output in a couple of tables below as a summary of a scan against a small test database; I have previously run a very similar script successfully against a multi-terabyte database. Every DCM page represents just under 4GB of data, so even at 4TB you would have only about 1,000 DCM pages to read. There is obviously a small amount of overhead for the temp tables and to get the file list; thereafter there will be 14 page reads for every DCM page. So as you can see this is a pretty lightweight and highly efficient script.

Summary result set

total_pages | altered_pages | percent_changed
2816 | 2736 | 97.16

Detailed output; this shows one result line per contiguous run of extents in which at least one page has been modified:

file_id | start_page | end_page | altered_pages
1 | 0 | 2736 | 2736

The script

In order to use this script, please change the database name on line 60 and then run it. I would recommend running it against a test or dev instance initially. Lastly, in preference to running against a production instance, I personally run this against a log-shipped copy of the database. Please let me know if this script proves useful or if you have any issues.

DCM_scanner