Significant Digits: Digging Deeper

We’ve looked at the basic concept of significant digits, then at how they interact with operations, which is one reason for defining them. This time I want to look a little closer at why they are defined as they are, which will involve considering some special cases.

When do zeros count, and why?

I’ll start with a discussion that I would have put in the first post of the series if it had fit, because it will be a good bridge into new issues. Here is the question, from 2005:

Significant Digits in Numbers Written in Scientific Notation

Say you have the number 2.34 * 10^4.  Why does my textbook say there are 3 significant digits, rather than 5 (from 23400--I'm assuming that the last two zeros are also significant)?

I have mentioned previously that significant digits are more at home in scientific notation, where we only write digits that are significant. Geoffrey is reversing that, thinking that converting to ordinary notation reveals more. I replied,

You're just making a wrong assumption: the whole point of significant digits is that zeros you write only to get the other digits in the right places are NOT significant.  When we write a number in scientific notation, we write ONLY the digits whose values we really know, so that all the digits written are in fact significant, and no others.  In ordinary notation, you can't be sure of that.  That is one major advantage of scientific notation.

Now, IF you had measured something as exactly 23400 meters, say (to the nearest meter), then you would write it in scientific notation as 2.3400 * 10^4.  Then the zeros would be significant.  But when it is written as 2.34 * 10^4, we take that to mean that, even if you write it as 23400, the zeros are not known to be accurate, and should not be taken as being significant.
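If you want to see this in code, here is a minimal Python sketch (my own illustration, not part of the original exchange) that formats a measurement so that exactly the significant digits are displayed:

  def to_sci(value, sig_figs):
      # Scientific notation with exactly sig_figs digits shown, so that
      # every digit written is significant.
      return f"{value:.{sig_figs - 1}e}"

  print(to_sci(23400, 3))  # '2.34e+04'   -- known only to the nearest hundred
  print(to_sci(23400, 5))  # '2.3400e+04' -- measured to the nearest meter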

That question was about zeros on the right; Geoffrey next asked a question that involved zeros on the left:

Thanks.  Here's another example from my book that I still don't understand:

"Given the radius of a sphere, 8.6 cm, calculate the volume. 

So V = 4/3pi(0.086m)^3 = 2.6643 * 10^(-3) m^3 = 2.7 * 10^(-3) m^3 "

"Here, the result has been rounded to two digits because the radius has two significant digits."

But I would first multiply 2.6643 * 10^(-3) = 0.0026643, and then round to the required two significant digits, giving the number "0.0". What's wrong here?

Here he has counted the zeros on the left as significant. After referring to one of the pages I posted previously, I pointed out the specific error:

You are using the wrong digits.  Significant digits are digits from the first to the last NONZERO digit, except that in some cases zeros on the RIGHT may be significant if they are obtained from an actual measurement to be exactly zero.

In the number 0.0026643, the zeros are NOT significant, because they are there only to keep the nonzero digits in the right place.  If you write it in scientific notation as 2.6643 * 10^-3, it is clearer that the first two significant digits are the 2 and 6, and rounding it to two significant digits gives you 2.7 * 10^-3 = 0.0027.

Again, scientific notation makes things clearer; he should have just left the number as it is. (I imagine he was not yet comfortable with scientific notation, which would explain the impulse to convert everything before thinking about it.)
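For the record, the textbook’s arithmetic checks out; here is a minimal Python sketch of it (the rounding helper is my own, hypothetical, not from the original answer):

  import math

  def round_sig(x, sig_figs):
      # Round x to sig_figs significant digits by locating its leading place.
      exponent = math.floor(math.log10(abs(x)))
      return round(x, sig_figs - 1 - exponent)

  r = 0.086                      # radius in meters, 2 significant digits
  V = 4 / 3 * math.pi * r ** 3   # 0.0026643... cubic meters
  print(round_sig(V, 2))         # 0.0027, i.e. 2.7 * 10^-3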

He wrote again, asking for clarification:

I don't understand when you say, "the zeros are NOT significant, because they are there only to keep the nonzero digits in the right place."

If I'm correct, the significant digits in a number are the digits that have been measured accurately, so I would say that the two zeroes in 0.024 are also significant digits.(?)

Aha! We have a contradiction between the simple definition of significant digits I had given, and the rule. Zeros on the left are accurate, so why not count them? (Ignore the fact that we could add infinitely many zeros on the left …) This is where we have to start digging a little deeper:

It can be hard to explain the precise definition of significant digits clearly, and also to grasp the concept for the first time!  I hope if you read more of our explanations you will find one that works for you.

Zeros on the left are not significant because they are not written for the same reason as zeros on the right.  I decide to cut off a number after a certain digit because any digits to the right are not known accurately: 0.0120 differs from 0.012 in that, for the former, I know that the zero on the right is correct, and for the latter, I don't. Zeros on the left do, I suppose, tell you that those digits are not something other than zero, but you wouldn't change them if you made a more accurate measurement.  The change from 0.012 to 0.112 is not just an improvement in accuracy, but a complete change in the value.

Precision, or accuracy, is all about the rightmost digits. Zeros on the left must be written as “scaffolding”, as I explained previously, not as a matter of accuracy. But there’s more:

But ultimately, you have to recognize that we define significant digits as we do not because the word "significant" forces us to define it that way, but just because, as some of the explanations we have given explain, that is the definition that works to give us a rule of thumb for determining the precision of the result of a multiplication. The leftmost nonzero digit indicates the size of the number, and the rightmost one indicates the finest detail measured.  The distance between those digits indicates the relative accuracy, and that is what we need to use.

So the significant digits are all the digits starting at the first non-zero digit, and ending at the last digit that would be written in scientific notation--the last non-zero digit, or possibly the last zero that is written because it represents an accurate measurement.

This is the real reason: Significant digits give us a rough measure of relative precision, the ratio of possible error to magnitude of the number.
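If you like to see rules as code, the counting rule quoted above amounts to this (a sketch of my own, assuming the number arrives as a written string, since significance lives in the notation):

  def count_sig_figs(s):
      # Count digits from the first nonzero digit onward; trailing zeros in
      # a whole number are assumed NOT to be significant.
      digits = s.lstrip("-").replace(".", "").lstrip("0")
      return len(digits) if "." in s else len(digits.rstrip("0"))

  print(count_sig_figs("0.0120"))  # 3 -- the trailing zero was written deliberately
  print(count_sig_figs("23400"))   # 3 -- trailing zeros here are only scaffolding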

That rough measure of relative precision is what we’ll be exploring in the next few sections.

What if all the digits are zero??

To bring details out in the open, there is nothing like an extreme case! This question from 2001 did that:

Significant Non-Zero Digits

We were learning about significant digits in my Algebra II class today, and our teacher asked this question:  

How many significant digits are there in a number with no non-zero digits?  

Example:  00.000  Are there any?

We had a pretty good discussion about this, but didn't come up with a definite answer. Thanks for your help.

I love to hear about classroom discussions, but you can’t really find an answer without a solid definition to base it on. So we needed a real definition, not just rules. I replied:

My first impression is that there are no significant digits, and I think a consideration of the meaning of significant digits will confirm that.

First impressions, like class discussions, can be dangerous. As we’ll see, I came close …

First, I had to come up with a definition. What I say here will need a slight correction further down the page, but it’s a good start. I began with a definition of relative precision, which as I said above is the foundation of the concept:

The concept of significant digits is really just a simple rule-of-thumb representation of relative precision. A number with, say, three significant digits, such as 1.23, represents the range 1.225 to 1.235, with a relative error of 0.005/1.23 = 0.004. A number with four significant digits would have about a tenth of that relative error. We can define "number of significant digits" more precisely as the negative common logarithm of this relative error, in this case 2.39:

  SD = -log(error/value)

     = -log(0.005/1.23) = -log(0.004) = 2.39

This is less than 3 because our number is smaller than the average three-digit number; for 5.00 we would get -log(0.005/5.00) = 3 exactly.

This log says that the error is \(\frac{1}{246} = \frac{1}{10^{2.39}}\) of the number’s value; rounding the result up gives us 3 significant digits, which agrees with the way 1.23 is written. The fact that we have to round reflects the fact that this is only a rough estimate. In counting significant digits, we are making a quick estimate of relative error, which is good as a rule of thumb, but not sufficient if you need to be really careful about precision.
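That calculation can be transcribed directly into Python (my own sketch of the formula above):

  import math

  value, error = 1.23, 0.005    # error is half the last written place
  print(round(-math.log10(error / value), 2))   # 2.39

  print(round(-math.log10(0.005 / 5.00), 2))    # 3.0 -- exact when the leading digit is 5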

Now that we have a precise definition (sort of), we can apply that to the question at hand:

If the number itself is zero, then you can't talk about relative error at all, since you can't divide by zero. Therefore the concept of significant digits is meaningless in this case. If we apply my definition, we get -log(0.0005/0) = -infinity, not 0. So I guess I'm glad I said "no significant digits" rather than "zero," because the proper answer is that the number of significant digits in this case is UNDEFINED!

The next morning, having done some research and further thinking, I realized that what I had defined was not really the number of significant digits, but something else called LRE, so I wrote back to make the distinction clearer:

I want to clarify one thing in case it confuses you, though it is not central to what I said, and may be more for your teacher's interest.

The log-of-relative-error (LRE) I discussed is not actually a definition of significant digits, but a more exact measure of precision for which the number of significant digits is used as an estimate. You can actually define the number of significant digits this way:

  SD = floor(log(value/next_digit))

Here, if you are unfamiliar with it, the floor function amounts to rounding down to the nearest integer. What follows is similar to what I had said in my examples, but now I round down:

What I mean by this is that if you divide the number (say, 1.23) by a number formed by putting a 1 in the first decimal place NOT written in the number (in this case, 0.001), take the common logarithm, and then take the "floor" of this log (the greatest integer that does not exceed it), you get the number of significant digits:

  SD = floor(log(1.23/0.001))
     = floor(log(1230))
     = floor(3.0899)
     = 3

This is related to the LRE by this formula:

  SD = floor(LRE + log(5))

This is true because we can rewrite the definition of LRE as

  LRE = -log(error/value)
      = log(value/(5*next_digit))

This corrects for the fact I pointed out, that the LRE is low for numbers whose first digit is less than 5. It doesn't affect my conclusion about zero.

When the number is zero, LRE is the log of zero (undefined), so SD by the new definition is still undefined.

We’ll see more below about how the first digit affects the validity of SD.
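Both definitions, and the relation between them, can be verified in a few lines (again my own sketch, following the quoted formulas):

  import math

  value, next_digit = 1.23, 0.001   # the first place NOT written is the thousandths

  print(math.floor(math.log10(value / next_digit)))   # SD = floor(log 1230) = 3

  lre = math.log10(value / (5 * next_digit))          # LRE = 2.39, as before
  print(math.floor(lre + math.log10(5)))              # SD = floor(LRE + log 5) = 3

  # For value == 0, math.log10 raises a ValueError: SD is undefined, as concluded.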

So is there a difference between 0.0 and 0.000?

A 2013 question took this issue of zero a step further:

Zero Significance

Are zeros following a decimal point still significant in this case?

   0.000

Is the integer part (zero) also significant in that case?

I find 0.000 a confusing example because it looks to me like the precision "signature" would be lost if post-decimal zeros are discarded. Consider the example of adding two decimals:

    0.000   
  + 0.31
    -----
    0.310
    
This would show that a calculation was precise to the nearest .001.

If 0.000 is treated as a single zero, wouldn't the result have one less digit than it should? Wouldn't the next operation on 0.31 then truncate or round the number more than expected?

Also, in the example of 0.000, what happens with the leading zero before the decimal point? Is it by any chance a special case when it is significant?

If you’ve been following this series, you should see the problem: Ben is asking about significant digits, but the example is about addition, where significant digits are not the relevant measure of precision.

I started with the basic question, referring to the previous answer:

There are no significant figures in 0.000 or in any representation of zero. Technically, the number of "sigfigs" is undefined in this case, as explained here:

  Significant Non-Zero Digits
  http://mathforum.org/library/drmath/view/55525.html 

But the number of decimal places displayed IS "significant" -- that is, this representation does imply that it is precise to the nearest thousandth. (I'd never considered this before: This example shows that the two concepts are so distinct that one may exist when the other doesn't.)

This is one of those things that you never think of until someone asks the right question, and is part of the fun of being a Math Doctor!

Keep in mind that when you add, it is not the number of significant figures that matters, but the decimal places. If you had been multiplying by 0.000, the result would be 0.00000, which would still have an undefined number of significant figures, so everything is consistent.

Again, this makes it even clearer that the two concepts are entirely different, and must not be confused.
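As it happens, Python’s decimal module tracks written decimal places in just this way, so both behaviors can be demonstrated directly (my illustration, not from the original answer):

  from decimal import Decimal

  print(Decimal("0.000") + Decimal("0.31"))  # 0.310   -- addition keeps decimal places
  print(Decimal("0.000") * Decimal("0.31"))  # 0.00000 -- zero, with sig figs undefined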

Instability of significant digits

We’ll close with a question that leads into the point I made earlier about the effect of the leading digit. This comes from 2016:

Rounding Round Numbers

What is the lower bound of 100 rounded to 1 significant figure?

I think the answer should be 95, as 95 is the smallest number which gives 100 when rounded to 1 significant figure.

But a teacher colleague of mine argues that the lower bound is 50, as from a measurement point of view, the smallest unit of measure is 100; and any measurement between 0 and 100 and above the midpoint (50) will be rounded up to 100.

David’s question is not quite clear, but presumably means, “If you take 100 as having been rounded to one significant figure, what is the lowest possible value before rounding?” The hard part is that if you round to one significant figure, some odd things can happen, as seen here:

Decimal Numbers and Significant Figures

I started with some general caveats:

I need to ask for some context and further information, because the wording of the question feels wrong in the first place. Can you quote some similar questions the answers to which you are sure of, and tell me about the context? Have you been given any definitions or rules that you are expected to follow?

Significant figures are just a rule of thumb, and numbers close to a power of ten are problematic in many situations. So there may not be a "correct" answer here. But if the question is rephrased, it can be changed into one that is less ambiguous. I think that is what you and your teacher colleague have done, ultimately: each of you has answered a different rewording of the question.

Then I reworded the question as I did above, pointing out the oddity:

If the question is what is the smallest number that would round to 100 when rounded to 1 sigfig, then you are right that it is 95; 50 would round to itself, not to 100. But this is a little troublesome, because the first sigfig in 95 is the tens place, while rounding it to the nearest ten gives 100, which -- if the tens place is considered significant -- has TWO sigfigs!

Do we look at the number of significant digits as shown in the answer, or as desired? This is tricky when rounding gives the number an extra digit.

The example becomes clearer in scientific notation. If we round \(9.5\times 10^1\) to one significant digit (nearest ten), we get \(1.0\times 10^2\), in which the 0 is significant, so the result clearly has two, not one, significant digits.

And that is probably related to your colleague's thinking. If we focus on the significant digit in 100 itself, then the smallest number that would round to 100 when rounded *to the nearest hundred* is 50!

You can think of 100 and other powers of ten as points of discontinuity in significant figures, where the rules do not fit the intention of the concept -- namely, to represent relative error. For this reason, some sources recommend bending the rules for numbers near a power of ten by keeping an extra digit.

This phenomenon is due to the rounding (floor) of the LRE that we saw in the formal definition. We are crossing the threshold from rounding down to rounding up, which pops SD up by 1.
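A quick check with the formal definition shows the jump. Each value below is taken as measured to the nearest ten, so the absolute error (±5) is identical throughout; only the significant-digit count changes (a sketch of mine):

  import math

  next_digit = 1   # the first unwritten place, for values rounded to the tens
  for value in (80, 90, 100, 110):
      print(value, math.floor(math.log10(value / next_digit)))
      # 80 -> 1, 90 -> 1, 100 -> 2, 110 -> 2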

So the question ends up being, WHY are you asking? What would you do differently if the answer were 95 versus if it were 50? Does it really matter, or are you just asking what an arbitrary rule decrees, without regard to meaning or usefulness?

... because, ultimately, the details of the sigfig rules are arbitrary, and are only a rough approximation of proper error propagation considerations.

David answered my question with an example of the sort of problem he had in mind:

Thank you very much for your input on this. I agree that context is important here, given the ambiguity of the question.

In the Mathematics syllabus of the Cambridge International General Certificate of Secondary Education (IGCSE), students need to learn about the lower and upper bounds of numbers rounded to a number of decimal places and significant figures.

IGCSE questions usually are in this form: "The length of a rectangle is 14 cm (to the nearest integer) and its width is 15.8 cm (to 3 significant figures). Find the lower bound of its area."

In this case, the lower bound of the area would be the product of the lower bound of the length (13.5 cm) and the lower bound of the width (15.75 cm).

Thank you for pointing out the arbitrariness in rounding numbers to significant figures, and that the usefulness and meaning of the process are more important than simply following what an arbitrary rule states.

I applied the example to the original question:

Now suppose that your IGCSE question said this:

  The length of a rectangle is 14 cm (to the nearest
  integer) and its width is 10.0 cm (to 3 significant
  figures). Find the lower bound of its area.

I would then say that the lower bounds are 13.5 cm and 9.95 cm, since the question gives the 10.0 cm measurement as good to 3 significant figures. But I would also be aware that if one were rounding 9.95 to three significant figures, one would write 9.95, not 10.0; so in real life, I would doubt the meaningfulness of the result. And if it had been 1 cm to 1 significant figure, I would have used 0.5 cm with the same concerns.
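In code, the bound arithmetic for both versions of the problem is one line each (my sketch of the calculations described above):

  # 14 cm to the nearest integer, 15.8 cm to 3 significant figures:
  print(round((14 - 0.5) * (15.8 - 0.05), 4))   # 212.625 cm^2, the lower bound

  # 14 cm to the nearest integer, 10.0 cm to 3 significant figures:
  print(round((14 - 0.5) * (10.0 - 0.05), 4))   # 134.325 cm^2, with the caveats above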
