The Many Meanings of “Quartile”

Some time ago I discussed various issues pertaining to the concept of median in statistics. The same issues, and more, affect the concept of quartile (the median being the second quartile), so much so that different statistical software packages produce many different answers for quartiles. I have seen this confuse students, who are taught one method for computing quartiles by hand, but then find that their software uses a different definition. Let’s look at some questions in this area.

Dueling software

First, from 2002:

Defining Quartiles

We have a project for statistics class where we have to collect a set of data, then find the mean, median, mode, range, upper quartile, lower quartile, interquartile range, and standard deviation. We also have to plot the data in a stem-and-leaf plot, dot plot, histogram, and box-and-whisker plot.

I decided to collect data on the heights of the players on our soccer team, and got the following data:

{70", 71", 71", 71", 72", 73", 74", 74", 74", 74", 75", 75", 77", 77", 77", 82"}

I didn't have any problems until I was checking my work using my calculator and a computer. All the values agreed with my hand calculations except the upper quartile, lower quartile, and interquartile range.

When I calculated the quartiles and IQR following the textbook, I got 77" (UQ), 71" (LQ), and 6" (IQR). But when I plugged the values into my calculator (a TI-83), it gave the upper quartile as 76", the lower quartile as 71.5", and the IQR as 4.5". I then tried making an Excel spreadsheet, and it gave the upper quartile as 75.5", the lower quartile as 71.75", and the IQR as 3.75". Then I went to the computer lab at school and tried using Minitab. That program gave the upper quartile as 76.5", the lower quartile as 71.25", and the IQR as 5.25".

If they just disagreed with my calculations, I'd figure that I made a mistake, or that there's some sort of rounding going on, since we're told to take the nearest data point and these programs obviously don't. But they don't even agree with each other. They can't all be right! What's going on?

Please help clear up this mystery.

Doctor TWE gave a long, detailed answer that has become a classic; in fact, while searching for further information, I later discovered an excellent article (referred to below) that cites this answer. I will quote selected parts of what he wrote.

Quartiles are simple in concept but can be complicated in execution.

The concept of quartiles is that you arrange the data in ascending order and divide it into four roughly equal parts. The upper quartile is the part containing the highest data values, the upper middle quartile is the part containing the next-highest data values, the lower quartile is the part containing the lowest data values, while the lower middle quartile is the part containing the next-lowest data values.

Here's where it starts to get confusing. The terms 'quartile', 'upper quartile' and 'lower quartile' each have two meanings. One definition refers to the subset of all data values in each of those parts. For example, if I say "my score was in the upper quartile on that math test", I mean that my score was one of the values in the upper quartile subset (i.e. the top 25% of all scores on that test). 

But the terms can also refer to cut-off values between the subsets. The 'upper quartile' (sometimes labeled Q3 or UQ) can refer to a cut-off value between the upper quartile subset and the upper middle quartile subset. Similarly, the 'lower quartile' (sometimes labeled Q1 or LQ) can refer to a cut-off value between the lower quartile subset and the lower middle quartile subset. 

The term 'quartiles' is sometimes used to collectively refer to these values plus the median (which is the cut-off value between the upper middle quartile subset and the lower middle quartile subset). John Tukey, the statistician who invented the box-and-whisker plot, referred to these cut-off values as 'hinges' to avoid confusion. Unfortunately, not everyone followed his lead on that.

I’ll be saying some similar things soon in discussing percentiles, which likewise have multiple meanings. But this is not what Tom was concerned about.

Five methods

It gets worse. Statisticians don't agree on whether the quartile values ('hinges') should be points from the data set itself, or whether they can fall between the points (as the median can when there are an even number of data points). Furthermore, if the quartile value is not required to be a point in the data set itself, most data sets don't have a unique set of values {Q1, Q2, Q3} that divides the data into four "roughly equal" portions. The SAS statistical software package, for example, allows you to choose from among five different methods for calculating the quartile values. How then do we choose the "best" value for the quartiles?

The answer to that question depends in part on the statisticians' objective in finding quartile values. Tukey wanted a method that was simple to use, "without the aid of calculating machinery." Others seek to minimize the bias in selecting the quartile values. Still others want methods that can be extended to other quantiles (for example, quintiles or percentiles). Thus, different methods have been developed for calculating the quartile values.

This is the underlying reason for the differences: different contexts or goals lead to different choices. Unfortunately, for students, the context is just whatever they are taught, and the goal is only to satisfy their teacher! Doctor TWE first described two specific methods, from Tukey and from the TI-83 (based on the textbook by Moore and McCabe), both of which work by dividing the data into four parts and including or excluding boundary values. He summarizes:

Those methods involve only simple arithmetic and are easily extendable to octiles (eighths), hexadeciles (sixteenths), etc. They are not, however, extendable to quintiles (fifths) or percentiles (hundredths), etc. Furthermore, they tend to have a high bias. (That is, the quartile values calculated on subsets of the data set tend to vary more, and are not good predictors of the quartile values of the entire data set.)

Then he discussed three methods based on formulas for the index at which to find a “hinge”, used by Mendenhall and Sincich, Minitab, and Excel.

As we can see, these methods sometimes (but not always) produce the same results. To further illustrate, consider the following data sets:

     A = {1, 2, 3, 4, 5, 6, 7, 8}
     B = {1, 2, 3, 4, 5, 6, 7, 8, 9}
     C = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
     D = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}

Here are the upper and lower quartile values, as calculated by each method described above:

              Tukey  M&M  M&S  Mini  Excel
              -----  ---  ---  ----  -----
   Set A LQ:   2.5   2.5   2   2.25   2.75
         UQ:   6.5   6.5   7   6.75   6.25

   Set B LQ:   3.0   2.5   3   2.50   3.00
         UQ:   7.0   7.5   7   7.50   7.00

   Set C LQ:   3.0   3.0   3   2.75   3.25
         UQ:   8.0   8.0   8   8.25   7.75

   Set D LQ:   3.5   3.0   3   3.00   3.50
         UQ:   8.5   9.0   9   9.00   8.50

He closes with three links, which are all now broken (16 years later). I’ll be applying each of these methods, and another, to Tom’s data below.
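
To make these five rules concrete, here is a minimal Python sketch of each one (the function names and the small interpolation helper are my own, not from the original answer); running it reproduces the table above:

    import math
    from statistics import median

    def interpolate(xs, pos):
        """Value at 1-based position pos in sorted data, linearly interpolated."""
        i = int(pos)
        frac = pos - i
        if frac == 0:
            return xs[i - 1]
        return xs[i - 1] + frac * (xs[i] - xs[i - 1])

    def tukey(xs):
        """Tukey's hinges: the halves share the median when n is odd."""
        half = (len(xs) + 1) // 2
        return median(xs[:half]), median(xs[-half:])

    def moore_mccabe(xs):
        """M&M / TI-83: the halves exclude the median when n is odd."""
        half = len(xs) // 2
        return median(xs[:half]), median(xs[-half:])

    def mendenhall_sincich(xs):
        """M&S: round position (n+1)/4 to the nearest integer (ties up) for Q1,
        and 3(n+1)/4 to the nearest integer (ties down) for Q3."""
        n = len(xs)
        lo = math.floor((n + 1) / 4 + 0.5)      # ties round up
        hi = math.ceil(3 * (n + 1) / 4 - 0.5)   # ties round down
        return xs[lo - 1], xs[hi - 1]

    def minitab(xs):
        """Minitab: positions (n+1)/4 and 3(n+1)/4, linearly interpolated."""
        n = len(xs)
        return interpolate(xs, (n + 1) / 4), interpolate(xs, 3 * (n + 1) / 4)

    def excel(xs):
        """Excel QUARTILE: positions 1 + (n-1)/4 and 1 + 3(n-1)/4, interpolated."""
        n = len(xs)
        return interpolate(xs, 1 + (n - 1) / 4), interpolate(xs, 1 + 3 * (n - 1) / 4)

    for name, top in [("A", 8), ("B", 9), ("C", 10), ("D", 11)]:
        data = list(range(1, top + 1))
        print(name, [f(data) for f in
                     (tukey, moore_mccabe, mendenhall_sincich, minitab, excel)])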

A better method

In 2012, we got an inquiry from a user of a different program, Origin, who had found that it gave results differing from any of those we had shown:

Origin of Origin's Outputs

I have read this question and answer:

  Defining Quartiles 

But another question has arisen. When I draw a box plot using Origin (version 8.5 of the software), I get

   LQ = 3 and UQ = 7 for data set B = {1, 2, 3, 4, 5, 6, 7, 8, 9}
   LQ = 3 and UQ = 9 for data set D = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}

Apparently, the Origin program included the median when calculating the lower and upper quartiles for data set B, but excluded the median when calculating those values for D.

So Origin seems to follow neither Tukey's method nor others. But what method could it have used for these calculations?

I responded, and after gathering some more information, I did some further research:

You're right, it appears they do not consistently use the M&S method; for odd sample sizes, they sometimes match Tukey and sometimes M&M. But since those have a very different style from M&S, I can't imagine they would mix the two. More likely, they follow some other method entirely.

For a larger study of different methods of calculation of quartiles (actually percentiles, the quartiles being the 25th and 75th percentiles), see

    Quartiles in Elementary Statistics (Eric Langford)

Table 2 is similar to Dr. TWE's table, but with slightly smaller sample data sets. Perhaps if you try those, you will recognize one that agrees with your software. Let me know what values you get for the quartiles.

I had discovered the Langford paper in 2007, noting that it mentioned Doctor TWE’s answer:

What are students to do when they check a MINITAB or SAS or Microsoft Excel calculation on their TI-83 Plus calculator and get a different answer, all of which differ from the answer in the back of the book? This is not an idle concern; a very confused student wrote to the “Ask Dr. Math” section of The Math Forum@Drexel inquiring why his TI-83, Excel, MINITAB, and his own paper-and-pencil calculations all gave different answers for the quartiles of his data set.

Langford lists 12 primary methods, one of which provided a likely answer to our question:

I suspect they may be using the "CDF method" (4):

   METHOD 4 ("CDF"): The Pth percentile value is found as follows.
   Calculate np. If np is an integer, then the Pth percentile value
is the average of #(np) and #(np + 1). If np is not an integer,
the Pth percentile value is #ceil(np); that is, we round up.
Alternatively, one can look at #(np + 0.5) and round off unless
it is half an odd integer, in which case it is left unrounded. As an example, if S5 = {1, 2, 3, 4, 5} and p = 0.25, then
#(np) = 1.25, which is not an integer. So we take the next
largest integer, and hence, Q1 = 2. Using the alternative
calculation, we would look at #(np + 0.5) = #(1.75), which
would again round off to 2. Note that this method can be
considered as "Method 10 with rounding." Translated into Dr. TWE's terms, L = n/4 if it is not an integer, round up and use Lth data point; if it is an integer, average the Lth and (L + 1)th data points U = 3n/4 if it is not an integer, round up and use Lth data point; if it is an integer, average the Lth and (L + 1)th data points For {1, 2, 3, 4, 5, 6, 7, 8}, this gives LQ: L = 8/4 = 2; average 2nd and 3rd data points, giving 2.5 UQ: U = 3*8/4 = 6; average 6th and 7th data points, giving 6.5 For {1, 2, 3, 4, 5, 6, 7, 8, 9}, we get LQ: L = 9/4 = 2.25; round up and use 3rd data point, 3 UQ: U = 3*9/4 = 6.75; round up and use 7th data point, 7 These cases agree with what you've told me.
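
Here is a short Python sketch of Method 4 as quoted (the function names are mine); it reproduces both of the checks just made:

    import math

    def cdf_quartiles(xs):
        """Langford's Method 4 ("CDF"): for p = 1/4 and p = 3/4, compute np;
        if np is an integer, average the np-th and (np+1)-th values (1-based);
        otherwise round np up and take the value at that position."""
        n = len(xs)
        def pct(p):
            pos = n * p
            if pos == int(pos):
                k = int(pos)
                return (xs[k - 1] + xs[k]) / 2
            return xs[math.ceil(pos) - 1]
        return pct(0.25), pct(0.75)

    print(cdf_quartiles([1, 2, 3, 4, 5, 6, 7, 8]))     # (2.5, 6.5)
    print(cdf_quartiles([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # (3, 7)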

He wrote back with confirmation that his software gave the expected results for this method. I replied with additional details:

Yes, it does look very likely that they are using this method, or at least something equivalent.

Note that at the end of his article, Langford says that this is the best method, and offers an equivalent method that is easy to teach:

   Thus, the following method is equivalent to the CDF
   Method 4, yet has the flavor of the Inclusive and
   Exclusive Methods 1 and 2, and thus should be more
   accessible to students.

   SUGGESTED METHOD: Divide the data set into two
   halves, a bottom half and a top half. If n is odd,
   include or exclude the median in the halves so that
   each half has an odd number of elements. The lower
   and upper quartiles are then the medians of the
   bottom and top halves, respectively.

So for these four examples, the work looks like this:

   {1, 2, 3, 4}:          halves {1, 2} and {3, 4}
                          Q1 = 1.5, Q2 = 2.5, Q3 = 3.5

   {1, 2, 3, 4, 5}:       halves {1, 2, 3} and {3, 4, 5} (median included)
                          Q1 = 2, Q2 = 3, Q3 = 4

   {1, 2, 3, 4, 5, 6}:    halves {1, 2, 3} and {4, 5, 6}
                          Q1 = 2, Q2 = 3.5, Q3 = 5

   {1, 2, 3, 4, 5, 6, 7}: halves {1, 2, 3} and {5, 6, 7} (median excluded)
                          Q1 = 2, Q2 = 4, Q3 = 6

So it looks like your software has made a good choice.
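
A Python sketch of this suggested method (the include-or-exclude logic is my reading of the rule above) reproduces the four worked examples:

    from statistics import median

    def suggested_quartiles(xs):
        """Langford's suggested method: split the data into bottom and top
        halves; when n is odd, include or exclude the median so that each
        half has an odd number of elements."""
        n = len(xs)
        if n % 2 == 0:
            bottom, top = xs[:n // 2], xs[n // 2:]
        elif (n // 2) % 2 == 1:           # excluding the median leaves odd halves
            bottom, top = xs[:n // 2], xs[n // 2 + 1:]
        else:                             # otherwise include it in both halves
            bottom, top = xs[:n // 2 + 1], xs[n // 2:]
        return median(bottom), median(xs), median(top)

    for k in (4, 5, 6, 7):
        print(suggested_quartiles(list(range(1, k + 1))))
    # (1.5, 2.5, 3.5), (2, 3, 4), (2, 3.5, 5), (2, 4, 6)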

Example

In my excerpts above, I skipped the detailed explanations of the first five methods; let’s demonstrate them with Tom’s data from the first question. The data were

70, 71, 71, 71, 72, 73, 74, 74, 74, 74, 75, 75, 77, 77, 77, 82 (n = 16)

Tukey and M&M/TI-83 (same when n is even): Split into halves at the median, use median of each half as quartiles.

[70, 71, 71, 71, 72, 73, 74, 74], [74, 74, 75, 75, 77, 77, 77, 82]
Q1 = 71.5
Q3 = 76
70 71 71 71|72 73 74 74|74 74 75 75|77 77 77 82

M&S: For Q1, use index L = (1/4)(n+1), round to nearest integer (up if halfway). For Q3, use index U = (3/4)(n+1), round to nearest integer (down if halfway).

L = (1/4)(16+1) = 4.25, rounded to 4; Q1 = 71
U = (3/4)(16+1) = 12.75, rounded to 13; Q3 = 77
70 71 71 71 72 73 74 74|74 74 75 75 77 77 77 82

Minitab: Same as M&S but with linear interpolation:

L = (1/4)(16+1) = 4.25; Q1 = 71 + 0.25(72 – 71) = 71.25
U = (3/4)(16+1) = 12.75; Q3 = 75 + 0.75(77 – 75) = 76.5
70 71 71 71|72 73 74 74|74 74 75 75|77 77 77 82

Excel: For Q1, use index L = (1/4)(n+3) with linear interpolation. For Q3, use index U = (1/4)(3n+1) with linear interpolation.

L = (1/4)(16+3) = 4.75; Q1 = 71 + 0.75(72 – 71) = 71.75
U = (1/4)(3*16+1) = 12.25; Q3 = 75 + 0.25(77 – 75) = 75.5
70 71 71 71|72 73 74 74|74 74 75 75|77 77 77 82

Langford’s method, for n even, is the same as Tukey and M&M. Note that, applied to Doctor TWE’s four data sets above, this method agrees with Tukey in case B, with M&M in case D, and with both in cases A and C.

Tom, using his book’s unspecified method, got 71 and 77, which is the same as M&S, so that may be what he was taught.

Note that almost all methods yield quartiles that are in the gaps where we would expect them to be (where I put bars); only M&S, which always uses numbers in the data set, gave quartiles that don’t lie in the gaps. None of them can really be called wrong. And for large data sets, the differences are insignificant.
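
As a cross-check, Python's standard library happens to implement two of these rules: statistics.quantiles interpolates at the Minitab-style positions (n + 1)p with method='exclusive' (the default), and at the Excel-style positions 1 + (n - 1)p with method='inclusive'. On Tom's data it matches the hand calculations above:

    from statistics import quantiles

    heights = [70, 71, 71, 71, 72, 73, 74, 74,
               74, 74, 75, 75, 77, 77, 77, 82]

    # Positions (n+1)p, linearly interpolated -- the Minitab rule
    print(quantiles(heights, n=4, method='exclusive'))  # [71.25, 74.0, 76.5]

    # Positions 1 + (n-1)p -- the Excel rule
    print(quantiles(heights, n=4, method='inclusive'))  # [71.75, 74.0, 75.5]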

But, then, what is right?

Definitions vs. methods

In 2015, another question about varying definitions led to an important distinction. (I referred to this previously in my discussion of medians.) Quoting only part of this long and detailed question,

Quartile Conflict

Consider the following sample (n = 10):

   17, 21, 22, 22, 26, 30, 38, 59, 67, 85

Find the median and the lower and upper quartiles (LQ and UQ).

After considering the position of the quartiles, is linear interpolation required for a sample with an even number of data points? This does not seem to be typical, but is it more accurate?

I responded,

The problem is that the definition of quartiles varies. Not all sources will give the same method.

I would follow the method that gives 22 and 59:

   17, 21, 22, 22, 26, 30, 38, 59, 67, 85
           ==         ^        ==

This preferred method takes the median of the two halves; and, in the odd case, either includes or excludes the median as needed to make each "half" odd.

This is discussed at length here:

  Defining Quartiles
  http://mathforum.org/library/drmath/view/60969.html 

I discussed similar issues here, and gave a link to a paper by Langford that I consider definitive:

  Origin of Origin's Outputs
  http://mathforum.org/library/drmath/view/77119.html 

Evidently, your text (or other source) gives a definition based on position; that approach can produce the same results as my method (Langford's recommended method) if you handle rounding appropriately. This is specifically discussed in the second link above. In your example, you would use n rather than n + 1, giving positions 2.5 and 7.5, then round 2.5 up to 3, and 7.5 up to 8.

Perhaps you can tell me more about the method you are using, and its source.
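
A few lines of Python confirm the rounding just described (this sketch handles only the non-integer positions that occur in this sample, not the averaging that an integer position would require):

    import math

    data = [17, 21, 22, 22, 26, 30, 38, 59, 67, 85]
    n = len(data)

    def at_rounded_up(pos):
        """Take the value at a 1-based position, rounding the position up."""
        return data[math.ceil(pos) - 1]

    print(at_rounded_up(n / 4))      # position 2.5 -> 3rd value: 22
    print(at_rounded_up(3 * n / 4))  # position 7.5 -> 8th value: 59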

Then I had second thoughts about how to best answer him:

Perhaps I should add something that explicitly addresses your underlying question of whether or not linear interpolation is more accurate.

The problem here is: what should your quartiles "more accurately" reflect? What is the actual DEFINITION of a quartile that the result of the METHOD must agree with? In what I wrote before, I confused these two different concepts, because texts often present the method as the definition.

This question is addressed somewhere in the links I gave, but can more easily be seen in this answer relating to the same problem in the definition of the median:

  A Closer Look at the Definition of Median
  http://mathforum.org/library/drmath/view/72726.html 

Adapting the definition I gave there to the first quartile:

   The first quartile of a set of data is a number
   such that NO MORE THAN 1/4 are LESS than the first
   quartile, and NO MORE THAN 3/4 are GREATER than
   the first quartile.

The number 22 fits this definition:

   17, 21, 22, 22, 26, 30, 38, 59, 67, 85
     less  ======         greater

   No more than 1/4 of the ten numbers (2 <= 2.5) are less than 22.

   No more than 3/4 of the ten numbers (6 <= 7.5) are greater than 22.

If we used your linear interpolation to get 21.75, would that fit?

   17, 21, 22, 22, 26, 30, 38, 59, 67, 85
    less  ^           greater

   No more than 1/4 of the ten numbers (2 <= 2.5) are less than 21.75.

   MORE than 3/4 of the ten numbers (8 > 7.5) are greater than 21.75.

So in fact, linear interpolation does not fit the definition!

Lesson: what matters is having an adequate definition, and then fitting that definition, rather than applying a method that sounds appropriate -- and I have to agree that your idea does sound good -- without reference to an actual definition!
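
That definitional check is easy to mechanize. Here is a small Python sketch (the function name is mine) that tests any candidate first-quartile value against the definition quoted above:

    data = [17, 21, 22, 22, 26, 30, 38, 59, 67, 85]

    def fits_first_quartile(q, xs):
        """True if no more than 1/4 of the values are less than q
        and no more than 3/4 of the values are greater than q."""
        n = len(xs)
        less = sum(x < q for x in xs)
        greater = sum(x > q for x in xs)
        return less <= n / 4 and greater <= 3 * n / 4

    print(fits_first_quartile(22, data))     # True:  2 <= 2.5 and 6 <= 7.5
    print(fits_first_quartile(21.75, data))  # False: 8 of 10 values exceed 21.75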

This discussion continued with an examination of a couple of other methods; my ultimate conclusion was that it is appropriate to use the assigned textbook’s method, and just make students aware that (a) there are other options, and (b) it doesn’t make any significant difference in real life:

It certainly makes sense to follow the authority you are under. It's certainly good that the authors of HIGHER GCSE MATHEMATICS FOR WJEC recognize the variability of the concept; and also emphasize that it is only with small data sets that it matters. But it's disappointing to see them claim their method as the "most accurate," without, apparently, saying what the "exact" result is that they are comparing it to, or how.

So what is right?

In school, as just mentioned, what is “right” is what you are taught; and what is right to teach is what your curriculum says. If you are a curriculum writer, you might want to follow Langford’s advice.

In America, many schools take their direction from the Common Core standards, which apparently have sided with the “M&M” method:

First quartile. For a data set with median M, the first quartile is the median of the data values less than M. Example: For the data set {1, 3, 6, 7, 10, 12, 14, 15, 22, 120}, the first quartile is 6.²

² Many different methods for computing quartiles are in use. The method defined here is sometimes called the Moore and McCabe method. See Langford, E., “Quartiles in Elementary Statistics,” Journal of Statistics Education, Volume 14, Number 3 (2006).

https://www.thecorestandards.org/the-standards/mathematics/glossary/glossary/

I wouldn’t be surprised if this choice is partly motivated by the widespread use of the TI-83 calculator family, as well as by ease of use; they are almost, but not quite, following the recommendation of their source.

But beyond education, it is important to be aware that there are many definitions in use (though they make little difference in large data sets). Wikipedia lists three such definitions (and refers to Doctor Twe’s answer); MathWorld compares five.
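
For instance, NumPy (1.22 and later) exposes the nine Hyndman–Fan quantile definitions by name through numpy.quantile's method argument; if I am matching the taxonomies correctly, 'averaged_inverted_cdf' (Hyndman–Fan type 2) is the CDF method Langford recommends. A quick comparison on Tom's data:

    import numpy as np

    heights = [70, 71, 71, 71, 72, 73, 74, 74,
               74, 74, 75, 75, 77, 77, 77, 82]

    # The nine Hyndman-Fan definitions, as named by NumPy
    for m in ['inverted_cdf', 'averaged_inverted_cdf', 'closest_observation',
              'interpolated_inverted_cdf', 'hazen', 'weibull',
              'linear', 'median_unbiased', 'normal_unbiased']:
        q1, q3 = np.quantile(heights, [0.25, 0.75], method=m)
        print(f"{m:>26}: Q1 = {q1}, Q3 = {q3}")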
