12c Histogram: TOP-Frequency


As of Oracle version 12.1, if the number of buckets N requested for a histogram is less than the number of distinct values in the data set, Oracle will not generate the unstable and imprecise legacy Height Balanced histogram. Instead, two new types of histogram are possible: TOP-Frequency and Hybrid. In order to choose between these two types of histogram, Oracle first performs a determination step. It examines how representative the TOP-N distinct values are relative to all values in the data set. If the TOP-N values account for more than a certain threshold of rows, a TOP-Frequency histogram is used. When the TOP-N values fail to satisfy the threshold condition, a Hybrid histogram is calculated. This first article investigates the TOP-Frequency histogram. It shows how Oracle determines the TOP-Frequency threshold, examines how the low and high values of the data set influence this threshold value, and outlines the arithmetic used by Oracle to work out the cardinality estimates for both popular and non-popular values of a TOP-Frequency histogram.

TOP-Frequency: prerequisites

Below is the simple model we are going to use in order to illustrate the TOP-Frequency histogram:
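The author's original script is not reproduced here. The following is a purely illustrative sketch (the table name t1, the n1 column and the individual value frequencies are assumptions, not the original data) that produces a data set of the same shape: 200 rows and 9 distinct, heavily skewed n2 values.

-- Illustrative only: 200 rows, 9 distinct n2 values, heavy skew
-- (t1, n1 and the exact frequencies are assumed, not the author's data)
create table t1 as
select rownum n1,
       case
         when rownum <= 110 then 42                        -- one dominant value
         when rownum <= 196 then mod(rownum, 6) * 10 + 10  -- six mid-range values
         when rownum <= 199 then 22                        -- a rare value
         else 125                                          -- the rare high value
       end n2
from   dual
connect by level <= 200;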

The above data set contains 200 rows, of which the n2 column has 9 distinct values as shown below:
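These two figures can be checked with a quick query such as the following (t1 being the assumed table name):

select count(*) total_rows, count(distinct n2) distinct_n2
from   t1;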

In order to qualify for a TOP-Frequency histogram, the following three conditions must be satisfied:

  • The number of buckets used to gather the histogram must be less than the number of distinct values in the data set
  • The sampling percentage is the default one: AUTO_SAMPLE_SIZE
  • The TOP-N most frequent distinct values must account for more than a certain threshold of rows (explained later in this article)

The first two conditions are quite obvious, so let’s dig into the details of the third prerequisite.

Top Frequency: Inflexion point

In the following example, we are going to ask Oracle to gather statistics for the data set above using fewer buckets (8) than the available number of distinct values (9). The AUTO_SAMPLE_SIZE default sampling will be used to ensure we don’t generate a Height Balanced histogram. In order to check what Oracle is doing behind the scenes, we are going to trace the call to the dbms_stats package (under 12.1.0.2) as shown in the following:
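The exact calls are not reproduced here; a hedged sketch of what they might look like follows. The TRACE setting is an undocumented dbms_stats preference and the precise flag bitmask shown below is an assumption (the value 1 is commonly reported as routing the trace output to dbms_output).

exec dbms_stats.set_global_prefs('TRACE', to_char(1 + 16));

begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 't1',                     -- assumed table name
    method_opt       => 'for columns n2 size 8',  -- 8 buckets < 9 distinct values
    estimate_percent => dbms_stats.auto_sample_size
  );
end;
/

exec dbms_stats.set_global_prefs('TRACE', '0');   -- switch tracing back off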

The ‘TRACE’ preference set in the first call above will trace any call to the dbms_stats package by sending the corresponding trace output directly to dbms_output in the current session. Here’s a snippet of this trace file:

The above trace file clearly shows (start processing top n values for column "N2") that, before attempting to collect a Hybrid histogram, Oracle did some internal arithmetic in order to process the TOP-8 distinct values for column N2 and see how representative they are compared with the remaining values. That is, Oracle tried to work out the proportion of unpopular values that are statistically insignificant when compared to the very few distinct popular ones. If popular values dominate the data set, Oracle will generate a TOP-Frequency histogram for the TOP-8 values and ignore the remaining insignificant values. The above trace file shows that Oracle has indeed succeeded in passing the threshold prerequisite (Mark column "N2" as top N computed) and produced a TOP-Frequency histogram as shown below:
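The histogram metadata itself is not reproduced here, but it can be checked with a query along these lines (the dictionary view is standard; the table name is the one assumed above):

select column_name, num_distinct, num_buckets, histogram
from   user_tab_col_statistics
where  table_name  = 'T1'
and    column_name = 'N2';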

In fact, I have been playing with the above call to the dbms_stats package, changing the number of buckets several times, until I succeeded in producing a TOP-FREQUENCY histogram with 8 buckets. Nevertheless, as specified in Anju Garg’s article, it is possible to derive the number of buckets for which Oracle will generate a TOP-FREQUENCY histogram based on the following inflexion point (or threshold in Oracle’s words), where N represents the number of specified buckets:
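Reconstructed here from the threshold documented for TOP-Frequency histograms (the TOP-N values must represent at least p = (1 - 1/N) * 100 percent of the rows):

Threshold (rows) = num_rows * (1 - 1/N)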

In other words, to qualify for a TOP-FREQUENCY histogram, the TOP-8 N2 distinct values should account for more than 175 rows in the data set, i.e.:
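TOP-8 row count > 200 * (1 - 1/8) = 175 rows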

And, as shown in Anju Garg’s article, the row count covered by the TOP-8 N2 values can be computed using the following SQL:
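One possible way to write it (not necessarily the exact query from the referenced article; names assumed as above, 12c row-limiting syntax):

select sum(cnt) top_8_rows
from  (select count(*) cnt
       from   t1
       group  by n2
       order  by cnt desc
       fetch first 8 rows only);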

The TOP-8 N2 values occupy 198 rows, which is more than the inflexion-point threshold of 175 rows. This is why the current configuration qualifies for a TOP-FREQUENCY histogram.

Top Frequency: Inflexion point and low-high values

Whatever the type of histogram Oracle has to generate, the low and high values of the column must be present in the histogram information. When these two extreme values are naturally present in the TOP-N distinct values, the above threshold formula remains correct. However, when they are insignificant and negligible, they will be forced into the histogram information. This forcing comes at the cost of excluding a victim value from the actual TOP-N distinct values. The victim is the least popular of the captured distinct values.

Let’s see whether the low and high values are among the TOP-8 N2 distinct values:
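A hedged sketch of such a check (assumed names again): list the column extremes next to the TOP-8 most frequent values.

select min(n2) low_value, max(n2) high_value
from   t1;

select n2, count(*) cnt
from   t1
group  by n2
order  by cnt desc
fetch  first 8 rows only;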

In the previous example, the highest value (125) is not among the TOP-8 N2 distinct values. However, as explained above, this extreme value has been forced into the histogram information as shown in the following:
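The histogram listing is not reproduced here; it can be displayed with something like the following, where the forced high value appears as the last bucket:

select endpoint_value, endpoint_number
from   user_tab_histograms
where  table_name  = 'T1'
and    column_name = 'N2'
order  by endpoint_number;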

As you can see, the insignificant high value 125 has been forced into the histogram table with a hardcoded frequency of 1. This forcing has been possible only at the cost of excluding a value from the TOP-8 naturally dominant values. And, as mentioned above, the victim value in this case is 22, the first value with the least popular frequency (3).

Notice that the above dbms_stats trace file also indicates that Oracle has performed this remove-and-force operation via the following line:

Although the trace file is not very clear, it nevertheless indicates that something is removed to make room for the high value 125 (Typ=2 Len=3: c2,2,1a):

Whenever Oracle performs such an exclusion and forcing of a value into the histogram table, the TOP-N row count should be adjusted before being compared with the threshold, as shown in the following:
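Using the figures quoted in this article (a frequency of 3 for the excluded value 22 and a hardcoded frequency of 1 for the forced value 125), a plausible reconstruction of that adjustment is:

AdjustedTopNRows = TopNRows - freq(excluded 22) + freq(forced 125)
                 = 198 - 3 + 1
                 = 196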

It’s worth noting that, in this particular case, both TopNRows and AdjustedTopNRows are greater than the threshold of 175 rows. However, whenever Oracle generates a Hybrid histogram where you think that your TOP-N setup qualifies for a TOP-FREQUENCY, you must ensure that you are comparing the correct TopNRows with the threshold number of rows.

2. Top Frequency: cardinality estimation

2.1 Popular values

The select statement below shows the resulting metadata produced for the captured popular N2 column values when the column qualifies for a TOP-FREQUENCY histogram thanks to the appropriate number of buckets (8):
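That metadata can be derived from user_tab_histograms, for example by turning the cumulative endpoint numbers into per-value frequencies (names assumed as above):

select endpoint_value,
       endpoint_number,
       endpoint_number
         - lag(endpoint_number, 1, 0) over (order by endpoint_number) frequency
from   user_tab_histograms
where  table_name  = 'T1'
and    column_name = 'N2'
order  by endpoint_number;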

For the above captured popular values, the Optimizer uses the following formula to calculate its cardinality estimates:

Applied to N2 = 42, the above estimation formula gives this:

This is of course backed by the following execution plan where we can see that Oracle has come up with exactly the same cardinality estimation of 106:
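The plan itself is not reproduced here; it can be obtained along these lines (42 is the popular value used in the article, the table name is the one assumed above):

select count(1) from t1 where n2 = 42;

select * from table(dbms_xplan.display_cursor);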

However, the exclusion-and-forcing operation carried out by Oracle in order to insert the high value into the histogram table means that neither the forced value (125) nor the excluded value (22) follows the same cardinality estimation formula. The next section discusses this particular case.

2.2 Non-Popular values

Since Oracle concentrated on capturing the 8 most representative N2 values out of the 9 available distinct ones, the least occurring distinct values have not been captured and are consequently considered to be non-popular:
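One hedged way to list them (assumed names again) is to subtract the captured endpoint values from the distinct column values:

select distinct n2 from t1
minus
select endpoint_value
from   user_tab_histograms
where  table_name  = 'T1'
and    column_name = 'N2';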

Let’s now see what cardinality estimate Oracle will come up with for this non-popular non-captured value:

I have to restrain the enthusiasm of those who use the “half the least popular value” rule to get the selectivity of a non-popular value in a frequency histogram, before they extend it to the current non-popular TOP-Frequency case using the following formula:
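For reference, that rule, as commonly stated for non-popular values of a pure frequency histogram (reconstructed here, not necessarily the exact formula from the original article), reads roughly:

E-Rows = 0.5 * frequency(least popular captured value)

and, as the 10053 trace discussed below suggests, it is not what the optimizer applies here.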

The best way to reverse engineer the path used by Oracle to get its cardinality estimates is to use the corresponding 10053 trace file, of which a snippet is shown below:
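The trace snippet is not reproduced here; a 10053 trace for the test query can be produced with, for example:

alter session set events '10053 trace name context forever, level 1';

select count(1) from t1 where n2 = 22;   -- the excluded, non-captured value

alter session set events '10053 trace name context off';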

This tends to suggest that Oracle is simply using the following formula to compute the selectivity of non-popular (insignificant) TOP-Frequency values:
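That is, something of the form (a hedged reconstruction, consistent with the NewDensity discussion that follows):

E-Rows = num_rows * NewDensity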

The NewDensity, when there is no extreme value forcing, is calculated using the following formula:

The formula shown below is used instead when Oracle has forced an extreme value into the histogram information:

Finally, using this computed NewDensity and applying it in the above mentioned cardinality estimation formula for a non-popular TOP-Frequency gives the following:

There is however an odd situation happening here. Oracle is considering the forced high value 125 to be non-popular and is applying the NewDensity to get its selectivity as shown below:

Oracle is certainly flagging somewhere in its internal code that the high value present in the histogram has been forced, which allows the Optimizer to recognize it and use the “appropriate” cardinality formula.

Conclusion

Prior to Oracle 12c, when a histogram was collected for a data set using fewer buckets than the number of distinct values, Oracle would systematically generate an unstable and imprecise Height Balanced histogram in which at most 127 (254/2) values could be captured. With 12c, under the default AUTO_SAMPLE_SIZE, Oracle will not only cease to generate Height Balanced histograms, but will also carry out a preliminary determination step to see how dominant the TOP-N values (N being the number of buckets) are with regard to the total number of rows in the data set. If these TOP-N values are beyond the inflexion point, a TOP-FREQUENCY histogram is calculated for the dominant values, while the neglected ones are considered to be non-popular values. This new type of histogram really helps the optimizer come up with highly accurate cardinality estimates, a fundamental prerequisite for optimal execution plans. In the next article we will examine the 12c Hybrid histogram.