Compression in Oracle – part 5: Costs of Index Compression

In the previous article in this series we looked at index compression and the way in which Oracle stores index data for a compressed index. We discovered that Oracle keeps two row directories – the “main” directory and the “prefix” directory – with slightly different structures, even though there is still only a single “row heap”. We also examined the way that Oracle maintains the row directories in real time to give the impression that the entries in the row heap are complete and sorted.

In this article we’ll be looking at the likely changes in storage space, workload and performance due to the increased complexity of the internal storage algorithms.

Space Savings

The first question, and in some respects the easiest to address, is how large and how repetitious does a prefix have to be before it’s worth compressing an index. The answer depends on how well you’ve designed your indexing strategy but, surprisingly, even a prefix which is just a single byte is worth compressing if there are typically more than just four rows per prefix. To demonstrate this, I’m going to build a table of 1,024 rows with a two-column index where the first column is a single byte and each distinct value for the first column has four rows in the table.

Bear in mind that this is a totally artificial example that I’m creating just to give you an idea of how the numbers pan out. At 1,024 rows it’s probably not even worth spending any effort (in the real world) trying to decide whether an index should be compressed – any costs or benefits are likely to be invisible to the human eye.

The number of rows has to be 1,024, by the way, because there are only 256 possible values for a single-byte column, and I want to show that an average of four rows per (single-byte) value is the break-even point. My sample script looks like this:
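
A minimal sketch of the sort of script I mean – the object names are illustrative, and it assumes a single-byte character set, using chr() to generate 255 distinct one-byte values (close enough to the 256 discussed above):

    create table t1
    as
    select
            chr(mod(level,255) + 1)         v1,     -- single-byte values chr(1) .. chr(255)
            level                           id
    from
            dual
    connect by
            level <= 1024
    ;

    create index t1_i1 on t1(v1, id);

    rem     and, for the compressed comparison, rebuild as:
    rem     create index t1_i1 on t1(v1, id) compress 1;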

Once I’ve created the data I can use the validate command to check how much space the index is using, and how that space is taken up – this isn’t a good idea on a production system, of course, since the call to validate an index will lock the table for the duration.
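
For reference, the calls I mean look like this – a sketch using the hypothetical t1_i1 from the script above (remember that index_stats holds just one row, describing the most recently validated index in your session):

    analyze index t1_i1 validate structure;

    select
            lf_rows, lf_blks, btree_space, used_space,
            opt_cmpr_count, opt_cmpr_pctsave        -- Oracle's estimate of the best prefix length and its saving
    from
            index_stats
    ;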

Before I show any results I’ll give a theoretical justification for the “four rows per prefix”. If I have a single-byte column as the first column of the index then each index entry spends two bytes on it – the byte itself and the length byte that says it’s one byte long. If I compress the index on the first column I create a prefix entry – a single-column row taking 4 bytes (flag byte, lock byte, one byte for the content, one byte for the length); but I also need a new “prefix directory” entry, which also takes 4 bytes – 2 bytes for the offset into the heap and 2 bytes for the prefix usage count. So my prefix requires a total of 8 bytes and saves 2 bytes each time it is used – hence I have to use it 4 times to break even.
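
Putting those numbers together:

    cost of creating the prefix:   4 bytes (prefix entry) + 4 bytes (directory entry) = 8 bytes
    saving each time it is used:   2 bytes (data byte + length byte)
    break-even:                    8 / 2 = 4 uses of the prefix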

Having said that, here are my results:

As you can see, apart from a few bytes of variation depending on how much luck Oracle has had trimming the branch rows, the compressed and uncompressed indexes need the same amount of space – any more rows per prefix and you’re past the break-even point and saving space. Compression on even a single-byte prefix is likely to save you space; and the longer the prefix, the smaller the number of repetitions you need before compression saves space.

If you want to use the validate command to check how much benefit you’re likely to get from compressing an index, remember that you need to rebuild the index twice, once without compression and once with, so that you are comparing like with like. (If you don’t do this the error is likely to be small, but if you’re going to do the test you might as well do it as fairly as possible.) Given the locking issues with the validate command, you should only do this type of test on a backup system.

It’s all very well reducing the size of indexes, but how might this affect performance? In principle you could argue that a smaller index requires fewer buffers to stay cached, so you may get some performance benefit on important indexes because you are less likely to need to do disc I/Os while using them. Countering this argument, compression means more entries per leaf block which, statistically, means more sessions are likely to want the same block at the same time – so you have the potential for increased contention on leaf blocks as you modify them, and for an increased number of consistent read clones as you re-read them.

Selects

The other part of performance comes from the actual amount of work done by a single session using an index. Where, and how, does using a compressed index rather than a non-compressed index introduce extra work? We’ll start with the work done for selecting rows by key, but I should warn you that the details of the processing I describe below are my best guess, based on examination of the effects rather than on any analysis of code or traced calls.

Imagine Oracle has worked through the root and branch blocks of an index to get to the leaf block that contains the index entry for the row you want, and assume (for simplicity of arithmetic) that we have 256 index entries in that leaf block. I believe (but have no proof, beyond a memory of a conversation I had about 20 years ago) that Oracle uses a binary chop to search for the entry until there are only two or three index entries left to check, and then it switches to a linear scan. So let’s apply this strategy, bearing in mind that we have to go through the row directory to get to the row heap, to an uncompressed version of our index leaf block:

  • We visit row directory entry 128 and jump to the row heap. Assume the value we find is too big.
  • We visit row directory entry 64 and jump to the row heap. Assume the value we find is too small.
  • We visit row directory entry 96 (half way between 64 and 128) and jump to the row heap. Assume the value is, again, too big.
  • From then on we might visit entries 80, 72 and 76, and then walk through 77, 78, 79.

Roughly speaking, we can expect to jump from the row directory to the row heap log(2,N) times (where N is the number of entries in the row directory) to find the row we want – about 8 jumps for our 256 entries.
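
If you like to see a mechanism as code, here’s a toy PL/SQL sketch of the strategy I’ve just described – my guess at the approach, not Oracle’s actual code, and the switch-over threshold of “two or three entries” is arbitrary:

    set serveroutput on

    declare
            type t_dir is table of varchar2(10) index by pls_integer;
            dir     t_dir;
            lo      pls_integer := 1;
            hi      pls_integer := 256;
            mid     pls_integer;
            target  varchar2(10) := lpad(78, 10, '0');
            probes  pls_integer := 0;
    begin
            for i in 1..256 loop                    -- a sorted "row directory"
                    dir(i) := lpad(i, 10, '0');
            end loop;

            while hi - lo > 3 loop                  -- binary chop ...
                    mid := trunc((lo + hi) / 2);
                    probes := probes + 1;
                    if dir(mid) > target then
                            hi := mid - 1;
                    elsif dir(mid) < target then
                            lo := mid + 1;
                    else
                            lo := mid; hi := mid;
                    end if;
            end loop;

            for i in lo..hi loop                    -- ... then a linear walk
                    probes := probes + 1;
                    exit when dir(i) = target;
            end loop;

            dbms_output.put_line('Probes: ' || probes);     -- 8, i.e. log(2,256)
    end;
    /

For a target of 78 this visits entries 128, 64, 96, 80, 72 and 76, then walks 77 and 78 – the same pattern as the list above.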

Now assume we have compressed the index – the steps that occur will vary with the number of prefixes and the distribution of index entries across those prefixes, but let’s assume a boring (but simple) arrangement of 16 prefixes with 16 index entries each (and ignore the fact that compression would give us more index entries in the block).

Oracle can start by applying its binary chop mechanism to the prefix directory – which means that it will take roughly 4 (viz: log(2,16)) steps to find the best matching prefix. At this point Oracle knows the starting point and the number of index entries (16 in our case) in the main row directory for this prefix, so it can apply its binary chop method to that section of the row directory, taking a further 4 (again log(2,16)) steps to find the match. Total work: 4 + 4 = 8 steps – pretty much the same as the log(2,256) = 8 steps for the uncompressed index.

For a range scan you have to walk through the row directory one entry at a time. If you’ve compressed the index, though, every time you get to the end of the list of row directory entries for one prefix you have to jump to the prefix directory to find the next prefix value and its count, and then jump back to the row directory to continue the scan. So there is a percentage overhead, which might become noticeable if you have a very large number of prefixes with a very small number of rows each and your typical range scan passes through several prefixes. In general, though, when I’ve identified indexes that look like good candidates for compression I haven’t noticed any significant increase in CPU when querying them.

There is a little trap to be careful of when you compress indexes, though. You expect to reduce the size of the index – which means the number of leaf blocks – and the number of leaf blocks is a component of the cost calculations used by the optimizer. So when you compress an index it is just possible that a query that was using some other index will suddenly find that the compressed index has become a little less costly and change its execution plan to use it. In principle, if the cost is lower then the plan should be a better plan; in practice, of course, we have all seen cases where a lower-cost plan does a lot more work and takes a lot more time.
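
If you want to see the difference as the optimizer will see it, you can compare the index statistics before and after the compressed rebuild – a sketch, again using the hypothetical t1_i1 (a create or rebuild has computed these statistics automatically since 10g):

    select
            index_name, blevel, leaf_blocks, distinct_keys, clustering_factor
    from
            user_indexes
    where
            index_name = 'T1_I1'
    ;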

Inserts and deletes

As we’ve seen in the previous article, Oracle maintains the row directory in an index block so that a walk down the row directory visits the entries in the row heap in the correct order. You will appreciate, therefore, that inserting an entry into an index requires Oracle to maintain the row directory with some care.

In a non-compressed index the operation for an insert is (approximately):

  • pin the buffer exclusively,
  • add the index entry to the top of the heap,
  • search (binary chop) the row directory to find the correct insertion point,
  • push the remainder of the row directory down one place,
  • insert the pointer to the new heap entry in the correct location,
  • unpin the buffer.

This becomes more complex for compressed indexes since you now have two directories – and don’t forget that the prefix directory is stored in the block immediately after the main row directory. Assuming you want to insert a row whose prefix is already in use, the steps are as follows (possibly in a slightly different order):

  • pin the buffer exclusively,
  • find the correct prefix pointer (binary chop?),
  • increment the prefix usage counter,
  • search (binary chop) the main row directory to find the correct insertion point,
  • push the remainder of the row directory, and the whole prefix directory, down one place,
  • insert the pointer to the new heap entry in the correct location,
  • unpin the buffer.

It’s a little more complicated if the prefix is not already in use in the block: when you get to the point where the matching prefix ought to be and find that it isn’t there, you have to push down the remainder of the prefix directory to insert the new prefix (with a usage counter of 1) at that point.
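
If it helps to visualise the shuffling, here’s a purely illustrative PL/SQL model – a toy, not Oracle internals – of a directory area holding the main row directory followed by the prefix directory; inserting one main directory entry has to move every later entry, including the whole prefix directory:

    set serveroutput on

    declare
            type t_dir is table of varchar2(30);
            dir_area        t_dir := t_dir();       -- main directory, then prefix directory
            n_rows          pls_integer := 6;
            n_prefix        pls_integer := 3;
            insert_at       pls_integer := 4;       -- sorted position for the new entry
    begin
            for i in 1 .. n_rows loop
                    dir_area.extend;
                    dir_area(i) := 'row ptr ' || i;
            end loop;
            for i in 1 .. n_prefix loop
                    dir_area.extend;
                    dir_area(n_rows + i) := 'prefix ptr ' || i;
            end loop;

            dir_area.extend;                        -- make room, then push everything down
            for i in reverse insert_at .. n_rows + n_prefix loop
                    dir_area(i + 1) := dir_area(i);
            end loop;
            dir_area(insert_at) := 'row ptr NEW';

            for i in 1 .. dir_area.count loop
                    dbms_output.put_line(i || ': ' || dir_area(i));
            end loop;
    end;
    /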

I won’t recite the details for deletes – you can visualize them as a reversal of the insert, again with two variations: the delete that reduces the count on a prefix, and the delete that eliminates the need for a specific prefix – though perhaps a prefix with a zero usage count is allowed to persist in case of rollbacks.

Clearly, then, there are two potential threats on inserts and deletes in an index (and remember that an index update is a delete followed by an insert): firstly, the extra CPU, because you have to do a little more work to maintain the two parts of the leaf block; secondly, the extra time that your session has to hold the block pinned in exclusive mode while it makes the changes. A further irony of compression, of course, is that the better the compression the more rows per leaf block, so the longer the row directory, and the longer it takes to push down the bit that has to move.

A possible side effect of the extra pinning time is that you may see an increase in buffer busy waits (other sessions waiting for you to release your pin) and further CPU being lost on latch activity as other sessions spin, competing for the cache buffers chains latches that cover those popular blocks.

Compressing indexes puts more data into a smaller space and makes hot spots hotter – if you run into this problem you may have to make the trade-off between compression and contention. There is a fairly standard workaround to the generic hot-spot problem, though, that may apply in some cases when compression is enabled: if you’ve already paid for the partitioning option, hot spots in indexes can often be dispersed by globally hash partitioning the index on the prefix columns into a small (power of 2) number of partitions.
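
For example – a sketch, reusing the hypothetical t1 and assuming the prefix column v1 is the leading column of the index:

    create index t1_i1 on t1(v1, id)
            compress 1
            global partition by hash (v1)
            partitions 4
    ;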

Summary

As far as space saving is concerned, compression works effectively even for short prefixes – although the shorter the prefix, the larger the number of repetitions per prefix has to be before you produce a saving that could really make a difference. If you try to compare space usage before and after compression on a production index, remember that B-tree indexes tend to settle at something like 70% (average) leaf block space usage, so a fair comparison requires you to rebuild the index twice, once compressed and once uncompressed.

Although index compression can be a very effective space saver which may, as a side effect, allow better use of the buffer cache and reduce disc I/Os, it may cause an increase in CPU usage on inserts, updates and deletes (though probably not a noticeable effect on selects). Symptoms that may appear if you compress an index that is subject to very frequent concurrent changes include: increases in CPU for “no apparent reason”, increases in buffer busy waits, and increases in latch contention. It’s almost always worth looking at index compression for any reasonably large index with at least a handful of rows per prefix, but do monitor the CPU usage of SQL that makes use of the index before and after compressing it.
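
As a starting point for that monitoring you could watch the child cursor statistics for the critical statements – a sketch only (in a real system you would more likely compare AWR or Statspack snapshots, and the filter on sql_text is purely illustrative):

    select
            sql_id, plan_hash_value, executions,
            round(cpu_time    / greatest(executions,1))    cpu_per_exec,   -- cpu_time is in microseconds
            round(buffer_gets / greatest(executions,1))    gets_per_exec
    from
            v$sql
    where
            sql_text like '%t1%'
    ;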

There is a catalogue of all five items in this series (and a few others) at this URL