
Interesting issue with heaps and page free space

First of all I would like to thank Uwe Ricken who alerted me to this point at the recent SQL Saturday in Manchester. I would also like to thank the organisers and sponsors of SQL Saturday, especially our local user group leader Chris Testa-O’Neill.

I recently blogged, to a small extent, about allocation pages. One of the allocation pages I blogged about was the Page Free Space (or PFS for short) page. Despite the name it tracks more than just free space, and the free-space information is only maintained for heap pages. It also records whether or not a page is an IAM page (also briefly mentioned in my blog post about allocation pages) and a few other things. Please see this blog post by Paul Randal of SQL Skills for a fuller description of PFS pages.
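If you would like to look at a PFS page yourself, DBCC PAGE will dump one. It is an undocumented command, so treat this as a sketch for a test instance only; the output lists ranges of pages along with their allocation status and fullness band: –

[code language="sql"]
-- Page 1 of every data file is a PFS page (and then every 8,088 pages after).
DBCC TRACEON( 3604 );         -- send DBCC PAGE output to the client
GO
DBCC PAGE( tempdb, 1, 1, 3 ); -- database, file id 1, page id 1, print option 3
GO
[/code]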

Now the main point I want to call out is that the PFS page isn't very granular in its ability to track page fullness. It only records which of the following fullness bands a page falls into: –

  • empty
  • 1 to 50% full
  • 51 to 80% full
  • 81 to 95% full
  • 96 to 100% full

It might already be clear from these bands that large, wide records in a heap could cause problems. If not, please allow me to demonstrate with a small demo followed by an explanation. Try the script below in SSMS: –

[code language="sql"]
-- Use tempdb for simplicity
USE [tempdb];
GO

-- Drop the test object if it exists and then create a test table to use
IF OBJECT_ID( N'dbo.demo', 'U' ) IS NOT NULL DROP TABLE dbo.demo;
GO

CREATE TABLE dbo.demo(
data CHAR( 2000 )
);
GO

-- We insert 4 values in one transaction
INSERT INTO dbo.demo( data )
VALUES( 'a' )
, ( 'b' )
, ( 'c' )
, ( 'd' );
GO

-- Let's check the number of data pages (should be 1 as these 4 rows fit)
SELECT allocated_page_file_id
, allocated_page_page_id
FROM sys.dm_db_database_page_allocations(
DB_ID( N'tempdb' )
, OBJECT_ID( N'dbo.demo', 'U' )
, 0
, 1
, 'DETAILED'
)
WHERE is_allocated = 1
AND page_type = 1; -- data page
GO

-- Now let's try that again with 4 separate transactions
TRUNCATE TABLE dbo.demo;
GO

INSERT INTO dbo.demo( data ) VALUES( 'a' );
INSERT INTO dbo.demo( data ) VALUES( 'b' );
INSERT INTO dbo.demo( data ) VALUES( 'c' );
INSERT INTO dbo.demo( data ) VALUES( 'd' );
GO

-- Well isn't that odd, we now have 2 data pages
SELECT allocated_page_file_id
, allocated_page_page_id
FROM sys.dm_db_database_page_allocations(
DB_ID( N'tempdb' )
, OBJECT_ID( N'dbo.demo', 'U' )
, 0
, 1
, 'DETAILED'
)
WHERE is_allocated = 1
AND page_type = 1; -- data page
GO
[/code]

Explanation of the issue: –

In a heap SQL Server uses the PFS page to track the amount of free space in a page; it does not read each individual page in the heap (which would make large heaps unusable). When you add records in a single transaction (like the first insert in the script) SQL Server adds records until they no longer fit on the page, then allocates another page and keeps adding records there. Since the 4 records fit on a single page when we add them in one transaction, SQL Server fills the page (well, near enough – there are actually 60 bytes left on the page with the records that I created). With individual transactions (like the second set of inserts) the PFS page is checked each time we attempt to insert a record, so for the second part of the demo the inserts go something like this: –

Insert 1: No data pages are allocated, so allocate a data page and insert the record.
Insert 2: Check the PFS page; our page is about 25% full but shows as 50% full (due to the lack of granularity in how free space is tracked), so we insert the record on the same page.
Insert 3: Check the PFS page; our page is about 50% full and shows as 50% full, so we insert the record on the same page.
Insert 4: Check the PFS page; our page is about 75% full and shows as 80% full. According to the PFS page there is not enough room for a row that needs 25% of the page, so we allocate a new page and insert the record there.
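If you want to see the actual free space rather than the PFS band at each step, and you are on SQL Server 2019 or later, sys.dm_db_page_info reports it per page. The file and page numbers below are placeholders; plug in the values returned by sys.dm_db_database_page_allocations in the script above: –

[code language="sql"]
-- SQL Server 2019+ only. Replace file id 1 and page id 368 with the values
-- returned by the allocation query in the demo above.
SELECT page_id
, page_type_desc
, pfs_alloc_percent -- the coarse PFS band
, free_bytes        -- the actual free space on the page
FROM sys.dm_db_page_info( DB_ID( N'tempdb' ), 1, 368, 'DETAILED' );
GO
[/code]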

The consequence of this optimisation is that many heap pages in tables with wide rows are likely to have quite a bit of free space. Please note that this is not the case with a clustered index (or a non-clustered index, but I'm just looking at table data to draw your attention to this), because in an index the amount of free space on a page is determined from the actual data pages and not from the PFS page.
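To put a number on that wasted space for the demo table, sys.dm_db_index_physical_stats in DETAILED mode reads the pages themselves (index id 0 is the heap); after the second set of inserts the average page fullness should be noticeably lower than after the first: –

[code language="sql"]
-- How full are the heap pages really? This reads the pages rather than the
-- PFS bytes, so it shows the space the PFS bands hide.
SELECT index_type_desc
, page_count
, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
DB_ID( N'tempdb' )
, OBJECT_ID( N'dbo.demo', 'U' )
, 0      -- index id 0 = the heap
, NULL   -- all partitions
, 'DETAILED'
);
GO
[/code]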

To wrap up, I'd like to point out that this issue can be resolved with the command below. Please be aware that, much like rebuilding a clustered index, this will force a rebuild of all non-clustered indexes on the table. The command has worked since at least SQL Server 2008 R2: –

[code language="sql"]
ALTER TABLE dbo.demo REBUILD;
GO
[/code]

Collations and text scan efficiency

SQL anti-patterns are an often misunderstood concept. One very basic anti-pattern is a leading wildcard character in a textual search. It can have really bad effects on a large table, and yet there are always instances of it becoming a requirement; you see it all too often, especially when reporting-style queries are run against a transactional database. Let's say that for some reason we wanted to find all of our customers whose surnames end in "ith", Smith for instance. We might write a query like the one below: –

[code language="sql"]
SELECT FirstName, LastName, DOB
FROM dbo.table1
WHERE LastName LIKE '%ith';
[/code]

Even if we have an index on LastName the best we can do is scan that index. An index in SQL Server, much like the index in a phone directory, is of little help to us here: if you were to try to find every surname in a phone book that ended with "ith" you would have to read every entry in the index. OK, that's still better than reading every page in the phone book, but our query requests 3 columns which may not be in the index, and the lookups back to the actual data may make the index scan plus lookups slower than a scan of the entire table.
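As an aside, if the search can be anchored at the start of the string the index becomes useful again, because a trailing wildcard can seek. One common workaround for trailing-text searches, sketched below against the hypothetical dbo.table1 from above, is to index a reversed copy of the column: –

[code language="sql"]
-- Sketch only: dbo.table1 and its columns are the hypothetical table above.
-- A computed column holding the reversed surname can be indexed, turning the
-- leading wildcard into a seekable trailing wildcard.
ALTER TABLE dbo.table1 ADD LastNameReversed AS REVERSE( LastName ) PERSISTED;
GO
CREATE INDEX ix_table1_LastNameReversed
ON dbo.table1( LastNameReversed )
INCLUDE( FirstName, LastName, DOB ); -- cover the query to avoid lookups
GO

SELECT FirstName, LastName, DOB
FROM dbo.table1
WHERE LastNameReversed LIKE 'hti%'; -- i.e. REVERSE( '%ith' ), which can seek
GO
[/code]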

Anyway, on to something quite trivial that can speed up such a query. If the term collation in this post's title jumped out at you as a potential performance improvement then you're correct. Of course we could also use a full-text index, but that carries overhead and all sorts of other joys; let's assume, just for this post, that this isn't a common query, or that you're low on disk space, so full-text indexes are not an option. I might cover those in another post at some point. A collation is a way of ordering and comparing text, in our case the text that we are storing, and the collation SQL Server uses for a column affects the query above. Let's look at an example. It's quite a long script, so I'd suggest you copy and paste it and then read along afterwards: –

[code language="sql"]
SET NOCOUNT ON;
GO

IF OBJECT_ID( N'tempdb..#results', 'U' ) IS NOT NULL DROP TABLE #results;
GO
CREATE TABLE #results( duration INT, run VARCHAR(30) );

-- Create a test DB
USE [master];
GO
IF( SELECT DATABASEPROPERTYEX( N'TestDB', 'Collation' ) ) IS NOT NULL DROP DATABASE TestDB;
GO

CREATE DATABASE [TestDB]
ON  PRIMARY
( NAME = N'TestDB', FILENAME = N'C:\SQLData\DataFiles\TestDB.mdf' , SIZE = 20480KB , FILEGROWTH = 20480KB )
LOG ON
( NAME = N'TestDB_log', FILENAME = N'C:\SQLData\LogFiles\TestDB_log.ldf' , SIZE = 20480KB , FILEGROWTH = 20480KB )
GO

USE [TestDB];
GO

-- Create and populate a relation
IF OBJECT_ID( N'dbo.LoadsOfData', 'U' ) IS NOT NULL DROP TABLE dbo.LoadsOfData;
GO
CREATE TABLE dbo.LoadsOfData(
id INT IDENTITY PRIMARY KEY CLUSTERED
, filler CHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL DEFAULT 'a' -- Notice the normal collation
);
GO

WITH n( n ) AS(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), vals( n ) AS (
SELECT ROW_NUMBER() OVER( ORDER BY n.n ) - 1
FROM n, n n1, n n2, n n3, n n4, n n5
)
INSERT INTO dbo.LoadsOfData( filler )
SELECT REPLICATE( CHAR( 48 + n % 10 ) + CHAR( 65 + n % 26 ) + CHAR( 97 + (n + 1) % 26 ) +
CHAR( 48 + (n + 3) % 10 ) + CHAR( 65 + (n + 5) % 26 ) + CHAR( 97 + (n + 7) % 26 )
, 30 )
FROM vals;
GO

-- Time something that will scan
DECLARE @start DATETIME2( 7 ) = SYSDATETIME(), @a INT;

SELECT @a = COUNT(*)
FROM dbo.LoadsOfData
WHERE filler LIKE '%e%';

SELECT @a = DATEDIFF( mcs, @start, SYSDATETIME() );

INSERT INTO #results( duration, run ) VALUES( @a, 'With normal collation' );
GO

-- Change the collation
ALTER TABLE dbo.LoadsOfData ALTER COLUMN filler CHAR(200) COLLATE Latin1_General_BIN; -- Now make the column a binary collation
GO

-- Try again
DECLARE @start DATETIME2( 7 ) = SYSDATETIME(), @a INT;

SELECT @a = COUNT(*)
FROM dbo.LoadsOfData
WHERE filler LIKE '%e%';

SELECT @a = DATEDIFF( mcs, @start, SYSDATETIME() );

INSERT INTO #results( duration, run ) VALUES( @a, 'With binary collation' );
GO

-- Yes the applied selects are inefficient but they take less space 🙂
SELECT duration, run
FROM #results
UNION ALL
SELECT (SELECT duration FROM #results WHERE run LIKE '%normal%') / (SELECT duration FROM #results WHERE run LIKE '%binary%'), 'Improvement multiple';
[/code]

So let's go over what we're doing, for clarity: –

  1. First we drop and then create a test database to use (always good to clean up)
  2. We create a temp table to hold the results
  3. We create a table to use and populate it with some nonsensical test data
  4. Now we run our wildcard query and get the execution time in microseconds. Note that we don't return any results to the client, so this is about as close to pure execution time as we can get
  5. Then we change the column to use a binary collation
  6. Now we re-run our test in the same way as before and again store the result in our temp table
  7. Finally we get the durations and the improvement multiple from the results table. I don't care about the inefficiency here 🙂

As you can see, even with this small table there is a remarkable difference in duration: an improvement of about 15 times. This is better than I was expecting; nearer 8 times is more usual, but the point is well made either way. Also, in this case we didn't have an index on the text column, but since that column makes up by far the majority of the data anyway it wouldn't have made much difference.
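As a side note, if altering the column's collation is not an option (it changes comparison and sorting behaviour for everything that touches that column), a variation worth testing is applying the binary collation only in the predicate; the column itself keeps its original collation: –

[code language="sql"]
-- Applies binary comparison rules just for this query. Worth testing - the
-- gain is usually similar because the expensive part is the character-by-
-- character LIKE comparison, not the stored data.
SELECT COUNT(*)
FROM dbo.LoadsOfData
WHERE filler COLLATE Latin1_General_BIN LIKE '%e%';
[/code]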

Index Depth for Partitioned Tables

One really useful function that Microsoft provide with SQL Server is the INDEXPROPERTY function. With it you can get the depth of an index, which is the number of pages in the index that need to be read to get from the root to the leaf. Assuming a table called dbo.test with an index pk_test, you could use a query like the one below: –

[code language="sql"]
SELECT INDEXPROPERTY( OBJECT_ID( N'dbo.test', N'U' ), N'pk_test', 'IndexDepth' );
[/code]

The problem with this built-in function is that it doesn't work on partitioned tables, yielding a not-so-helpful NULL as a result. There is of course a way to get the index depth of each balanced tree structure of each partition with sys.dm_db_index_physical_stats. My issue is that I don't want that much information; I just want to know how much load I will be putting on the server, load here being measured in terms of reads and, by extension, buffer pool usage. So I rolled my own script to look up the index depth of all partitions of an index. We'll get to that in a moment; first I thought I'd explain why I cared.
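For comparison, the sketch below is the built-in route I wanted to avoid: sys.dm_db_index_physical_stats does return index_depth per partition, but even in LIMITED mode it does more work than just reading the root pages. The dbo.test and pk_test names follow the example above: –

[code language="sql"]
-- Roughly one row per index per partition.
SELECT index_id
, partition_number
, index_depth
, page_count
FROM sys.dm_db_index_physical_stats(
DB_ID()
, OBJECT_ID( N'dbo.test', 'U' )
, NULL   -- all indexes; join to sys.indexes to filter on pk_test if needed
, NULL   -- all partitions
, 'LIMITED'
);
[/code]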

I was looking at some indexes that someone else had created with the DTA (Database Engine Tuning Advisor). The DTA is a very nice feature that works quite well when used properly; it's not so great when someone decides that a scatter-gun tactic of throwing every recommended index at a database would be clever. One place where the DTA falls down is that it doesn't know what you were trying to achieve with your design, so it may well recommend a great index that breaks something else for you. For example, with partitioned tables you can lose the ability to fast switch partitions if you have non-aligned indexes. Here's an example of something similar to what I had. Let's assume you have 100 partitions, each of depth 3 (which we don't yet know, but my script can help you there). You have a primary key which is a nice surrogate integer identity (often recommended but not always advisable), and the partitioning key is a datetime column. Finally, you also have a non-aligned index on the primary key alone, of depth 4. The clustering key is on the identity, but in practice you have to be in the right partition first, so effectively it might as well be (datetime, id) instead.

Now if I wanted to check for the existence of a record by looking up the ID, for example, the optimiser would correctly conclude that the non-aligned index is ideal: 4 reads and we're done. If we used the clustering key we would have to check every partition, because we cannot get to the datetime value first (we only have the ID). That's going to be 300 reads, 3 per partition. Yep, them 4 reads sure are looking good right about now. The downside was that this non-aligned index prevented me from fast switching partitions, so while it was ideal for the data access it was really bad for my data purge. Picture a very upset DBA about this time 🙂
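Before fixing anything I wanted to know how widespread the problem was. A quick, somewhat simplified check for switch-blocking indexes is to list indexes on partitioned tables whose data space is not a partition scheme (a sketch only; strictly speaking alignment also requires the same partition function and column): –

[code language="sql"]
-- Rough check for non-aligned indexes: tables that have at least one index on
-- a partition scheme, plus any indexes on those tables that are not on one.
SELECT OBJECT_SCHEMA_NAME( i.object_id ) AS schema_name
, OBJECT_NAME( i.object_id ) AS table_name
, i.name AS index_name
FROM sys.indexes AS i
WHERE EXISTS(   -- the table is partitioned (some index sits on a partition scheme)
SELECT 1
FROM sys.indexes AS i2
JOIN sys.partition_schemes AS ps ON ps.data_space_id = i2.data_space_id
WHERE i2.object_id = i.object_id
)
AND NOT EXISTS( -- but this particular index is not on a partition scheme
SELECT 1
FROM sys.partition_schemes AS ps
WHERE ps.data_space_id = i.data_space_id
);
[/code]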

I now had some work to do to sort this out, but before I did that I came across the annoyance with INDEXPROPERTY, so I made a stored proc to help. The proc can be created in any database and can then be pointed at any other database, so I recommend putting it in a Tools or DBA database if you choose to use it. By the way, it uses dynamic SQL. I really like dynamic SQL for its power and flexibility; in the right hands, and in the right places, it is a really powerful tool, and I've seen some really bad performance issues in the past where dynamic SQL was the right way to go and no other way would have been nearly as good. In this case I needed dynamic SQL to easily construct the query to get the root pages of each partition, and to capture the root page data with DBCC PAGE. The proc checks for the existence of the database, which should be the only part that might allow SQL injection, and bails out if the database cannot be found. There is also error handling, and anything deemed serious will be logged; otherwise you should get a result set. This script should work on SQL Server 2005 and later, although I cannot test it on all versions. Please let me know if you use it and find it useful, or if you find any errors.
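The full script is linked below; purely to illustrate the defensive pattern described (and emphatically not the proc itself), the existence check and the dynamic SQL construction go roughly like this, with @dbname standing in for the proc's database name parameter: –

[code language="sql"]
-- Sketch of the pattern only, not the proc itself. Validate the database name
-- before it gets near any dynamic SQL, then build the statement with QUOTENAME
-- so the name cannot break out of the identifier.
DECLARE @dbname SYSNAME = N'SomeDatabase' -- hypothetical parameter
, @sql NVARCHAR( MAX );

IF DB_ID( @dbname ) IS NULL
BEGIN
RAISERROR( N'Database %s not found.', 16, 1, @dbname );
RETURN;
END;

SET @sql = N'USE ' + QUOTENAME( @dbname ) + N'; SELECT DB_NAME() AS [current_db];';
EXEC sys.sp_executesql @sql;
[/code]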

Get index depth for partitioned tables