More on 0.999…

(An archive question of the week)

In collecting questions and answers about 0.999… for the last post, there were two that were too long to include, but that dig more deeply into issues that some of the standard answers tend to gloss over. So here, I want to look at those two answers, both of which deal with how to handle arithmetic on infinitely many digits: How can you add, subtract, or multiply when there is no rightmost digit? This is not central to the main question, but enough people express uncertainty about this aspect to make it worth covering.

Adding without a place to start

Here’s the first question, from Darryl in 1999 (appropriately enough):

What is 0.999... + 0.999...?

While discussing the 1 = 0.999... solution, a person asked what is 0.999... + 0.999...? I think it is a good question - one that I could not answer. It should be 2 but how do we show this?

In the archives at

  Getting 0.99999...   

you say "Can you figure out why 0.3bar + 0.3bar = 0.6bar? Because these numbers go on forever, you will need to use a little logic to add them. (The algorithm that you learned for adding numbers doesn't work very well when you can't get to the rightmost number.)"

If we copy the idea of limits, we could say that for any number n we could start adding at this rightmost point and get the 0.66...6(nth) place. And since we can do this for any n then the equation holds. But if we try this for 0.9bar + 0.9bar we seem to get something that looks less like 2, namely 1.9....8(nth). But if we replace 8 with 9, we seem to get a number between 2 and the one we have, which makes me feel as if 0.9bar + 0.9bar < 2.

The link is to an elaborated version of the 1/3 + 1/3 + 1/3 = 1 argument, which I only gave a link to last time. (The notation 0.3bar as a typable version of \(0.\overline{3}\), comes from there.) As Doctor Derrel pointed out there, we normally add from right to left, and there is no rightmost digit to start at in 0.333… + 0.333…; fortunately, we know that it isn’t necessary to start at the right when there are no carries, so we can just add left to right. This is discussed here:

Adding Left to Right

Darryl’s suggestion is to consider any n digits, where we can clearly add 0.333…3 + 0.333…3 = 0.666…6, so we get the desired result by letting n increase without bound.
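Darryl's idea is easy to test concretely. Here is a short Python sketch (my own illustration, not part of the original exchange) that adds the n-digit truncations of 0.333… using exact fractions, so there is no floating-point noise:

```python
from fractions import Fraction

def third_truncated(n):
    # 0.333...3 with n threes, as an exact fraction: (10**n - 1) / (3 * 10**n)
    return Fraction(10**n - 1, 3 * 10**n)

for n in (1, 5, 10):
    s = third_truncated(n) + third_truncated(n)
    # The sum is exactly 0.666...6 (n sixes), with no carries anywhere
    assert s == Fraction(2 * (10**n - 1), 3 * 10**n)
```

Every truncation behaves the same way, which is why letting n grow without bound gives 0.6bar with no surprises.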

But, as Darryl points out, things get trickier when you try to add 0.999… + 0.999… . With 10 decimal places, for example, we get

     0.9999999999
   + 0.9999999999
   --------------
     1.9999999998

You get a carry into each place from the place to its right, but where can you start that? And if there’s an 8 anywhere, then we’re less than 1.999… = 2, right? The answer, ultimately, is that in fact the 8 is nowhere!

Doctor Rick answered this thoughtful question, starting with a quick answer:

Hi Darryl, thanks for your question. It's an interesting twist on the perennial debate about whether 0.999... could really equal 1.

Since that 8 is the nth digit and you take n to infinity, there is no digit 8 in the limit.

With no rightmost place where the sum 9 + 9 = 18 could stand without a carry arriving from its right, every digit of the sum will be 9. But that is hard to picture directly. So we have to do as mathematicians do, and find a way to work around infinities. That way is to use limits formally.

Apply the definition of a limit to what you have. We can express it as a game: you pick a number epsilon, as small as you want, and I have to pick a number n such that

     |2 - 1.999...98| < epsilon

where the 8 is in the nth decimal place. Since

     |2 - 1.999...98| = 2*10^(-n)

I just pick

     n > -log(epsilon/2)

This is no harder than in the case of 0.999...9. The 8 in the last place has no effect on the limit; it only delays somewhat the approach to the limit, as indicated by that factor of 1/2.

The idea here is that we can consider such a sum with any number n of decimal places, which will have an 8 in the last place. We can get this sum to be as close as we want to 2, just by taking enough decimal places; in effect, to get as close as we want, we just have to take enough digits to push the 8 beyond where it would make the error too large. If we want our sum to be less than \(\epsilon = 0.00000000000000000001\) away from 2, we just need to use \(n > -\log(\epsilon/2) = -\log(0.00000000000000000001/2) = 20.3\), so 21 digits will be enough: that is, 0.999999999999999999999 + 0.999999999999999999999 = 1.999999999999999999998.
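The epsilon game can be played mechanically. This little Python sketch (mine, using the formula from Doctor Rick's answer; the function names are made up) finds a winning n for a given epsilon and then checks it with exact arithmetic:

```python
import math
from fractions import Fraction

def doubled_nines(n):
    # 0.999...9 (n nines) + 0.999...9 = 1.999...98, with the 8 in the nth place
    nine = Fraction(10**n - 1, 10**n)   # exactly 1 - 10**-n
    return nine + nine

def digits_needed(epsilon):
    # |2 - 1.999...98| = 2*10**-n, so we need n > -log10(epsilon/2)
    return math.floor(-math.log10(epsilon / 2)) + 1

eps = 1e-20
n = digits_needed(eps)              # 21 digits, as in the worked example
assert abs(2 - doubled_nines(n)) < eps
```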

Here is another way to approach the problem, by manipulating infinite sums. If you want to make everything explicit, you can write the sums as limits of finite sums, and get into the issue of switching the limit and the sum as you go through this process.

Let's write 0.999... as an infinite sum:

     0.999... = Sum[n = 1 to infinity](9*10^-n)

Now we can add infinite sums:

  0.999... + 0.999...
    = Sum[n = 1 to infinity](9*10^-n)
      + Sum[n = 1 to infinity](9*10^-n)

    = Sum[n = 1 to infinity](18*10^-n)

    = Sum[n = 1 to infinity](10*10^-n + 8*10^-n)

    = Sum[n = 1 to infinity](10^(-n+1))
      + Sum[n = 1 to infinity](8*10^-n)

    = Sum[n = 0 to infinity](10^-n)
      + Sum[n = 1 to infinity](8*10^-n)

    = 1 + Sum[n = 1 to infinity](10^-n)
        + Sum[n = 1 to infinity](8*10^-n)

    = 1 + Sum[n = 1 to infinity]((8+1)*10^-n)

    = 1 + Sum[n = 1 to infinity](9*10^-n)

    = 1.999...

I think this is the result you are looking for. Since we know that 0.999... = 1, the answer is 1 + 1 = 2; but you wanted to see that unbroken string of 9's.

He is adding an infinite string of carries (1) to an infinite string of 8’s, and getting an infinite string of 9’s.

By the way, notice that the final 8 never appears in this method. That's because we're dealing with infinite sums from the start, so there is always a carry from the next digit to the right. The sum of 10^-n is the sum of the infinite number of carries.
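We can watch that infinite string of carries at work numerically. In this Python sketch (my own check, not Doctor Rick's), each side of the manipulation is truncated after N terms; the truncations differ by exactly 10^-N, the one carry that has not yet arrived from further right:

```python
from fractions import Fraction

def lhs(N):
    # partial sum of 18*10**-n for n = 1..N: two copies of 0.999...9 added
    return sum(Fraction(18, 10**n) for n in range(1, N + 1))

def rhs(N):
    # 1 + partial sum of 9*10**-n for n = 1..N: the truncation 1.999...9
    return 1 + sum(Fraction(9, 10**n) for n in range(1, N + 1))

for N in (1, 5, 10):
    # The gap is exactly the carry still waiting beyond the truncation point
    assert rhs(N) - lhs(N) == Fraction(1, 10**N)
```

Both sides close in on 2 as N grows, and the gap between them vanishes in the limit.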

Multiplying without a place to start

A similar question arises in thinking about the standard method of conversion from a repeating decimal. This question is from 2000:

Induction on .999...

I have been having an argument with my physics teacher over the fact that point nine recurring (.999... or PNR) equals 1. I showed him the proof on your site and he pointed out a fact that I hadn't noticed.

How can you multiply PNR by ten if you can't get to the beginning? For example, to multiply 215 * 3, you would start off on the right, multiplying 3 by 5, then 1, then finally 2. Now how can you multiply PNR by ten if the farthest right value cannot be reached?

Thank you for any clarification you can give.

As you may recall, we convert 0.999… to a fraction by subtracting it (x) from 10x:

      x = 0.9999...  
    10x = 9.9999... 
10x - x = 9.0000... 
     9x = 9 
      x = 1

Here, we first had to multiply 0.9999… by 10. But, as in the addition case above, there is no rightmost digit at which to start the multiplication, so can we actually do that multiplication? That is Blake’s teacher’s challenge.
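Before looking at the answer, it is worth seeing what the subtraction trick does to finite truncations. In this Python sketch (my own; x_trunc is a made-up helper name), the truncation of 0.999… to N places is exactly 1 - 10^-N, and 10x - x lands closer and closer to 9:

```python
from fractions import Fraction

def x_trunc(N):
    # 0.999...9 with N nines, as an exact fraction: 1 - 10**-N
    return Fraction(10**N - 1, 10**N)

for N in (1, 5, 10):
    x = x_trunc(N)
    # 10x - x = 9x = 9 - 9*10**-N, which approaches 9 as N grows
    assert 10 * x - x == 9 - Fraction(9, 10**N)
```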

Doctor TWE took it up, first referring to the FAQ (which includes links to several of the answers I’ve referred to) as a source of supplemental arguments before turning to the main issue.

I wanted to comment on your (or your teacher's) idea that "you can't get to the beginning." We multiply values from right to left only as a matter of convenience, so we don't have to "backtrack" each time we carry. It is equally valid to multiply from left to right. This is often done when we want to get a quick estimate for the answer, then get more accuracy later. The LEFTMOST digits contribute the most to the answer, so to get "in the ballpark," we can multiply them first. I'll demonstrate with a concrete example, then a general argument.

We’ve discussed this left-to-right idea in Dividing Right to Left, Adding Left to Right.

Let's use your example of 215 * 3. I can multiply them as follows; first, I'll multiply the 3 by the 2 (actually, by 200) and put the answer in the hundreds place, like this:

       215
     *   3
     -----
       600

Then I'll multiply the 3 by 1 and put the answer in the tens place:

       215
     *   3
     -----
       600
        30

Finally, I'll multiply the 3 by 5 and put the answer in the units place:

       215
     *   3
     -----
       600
        30
        15

When I add these up, I get the correct answer:

       215
     *   3
     -----
       600
        30
        15
     -----
       645

Notice that along the way, I got pretty good estimates for my final answer. After the first step my total was 600 (not a bad estimate, off by less than 10%), after the second step I had 630, and after the third step, I got 645. Going right to left, my totals after each step would be: 15 (not a very good estimate of the final answer), 45 (still not very good) and finally 645.

So multiplying left to right just requires us to modify the digits we already have to account for carries from new columns as we do them; but the effect of the new column rarely propagates very far back into our work, and never makes a large change. Each new digit makes a smaller adjustment than the one before. That will be relevant as we continue.
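Doctor TWE's left-to-right procedure is easy to mechanize. Here is a Python sketch (the function name is mine) that accumulates the partial products from the most significant digit down, returning the running estimates he describes:

```python
def multiply_left_to_right(digits, x):
    """Multiply a number given as decimal digits (most significant first)
    by x, accumulating partial products from the left."""
    total = 0
    place = len(digits) - 1
    estimates = []
    for d in digits:
        total += x * d * 10**place   # leftmost digits contribute the most
        estimates.append(total)
        place -= 1
    return estimates

# 215 * 3: the running estimates are 600, then 630, then the exact 645
assert multiply_left_to_right([2, 1, 5], 3) == [600, 630, 645]
```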

In the general case, consider multiplying a 3-digit value ABC by some value X, where A, B and C are digits. What we really have, then, is

     (100*A + 10*B + 1*C) * X

Conventionally, we solve this starting with the units digit as:

     X*C*1 + X*B*10 + X*A*100

But using the commutative property of addition, this is equal to:

     X*A*100 + X*B*10 + X*C*1

showing that the order doesn't matter. If I wanted to, I could start in the middle, for example:

     X*B*10 + X*A*100 + X*C*1

I'd just have to be careful not to miss any digits or do any ones twice.

What we see here is that, conceptually, we can think of the whole multiplication as being done at once; it doesn’t matter in what order we do the pieces, because the sum is commutative.
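Since the claim is just the commutativity of addition, brute force confirms it: in this Python sketch (mine), all six orderings of the three digit products give the same total.

```python
from itertools import permutations

A, B, C, X = 2, 1, 5, 3          # 215 * 3 again
terms = [X * A * 100, X * B * 10, X * C * 1]

# Sum the digit products in every possible order; all orders agree
results = {sum(order) for order in permutations(terms)}
assert results == {645}
```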

Without being able to start with the most significant digit, we could never find a value like 2*pi, because pi (like all irrationals) is an infinite non-repeating decimal. When we say 2*pi is approximately 6.2831853, we can do so because we started multiplying at the leftmost digit, not at the unreachable rightmost one.

One final note: How can we be sure that the multiplications by 10 continue as we expect (i.e. they continue shifting the digits 1 place to the left) and that the subtractions of successive digits produce zeroes infinitely? These steps can be proven using a technique called mathematical induction. That's too complex to explain here, but if you search our Ask Dr. Math archives for the word "induction" (type it without the quotes), you'll find many questions and answers about it.

Whenever we do a calculation on numbers like π in a calculator, we are working only with the leftmost digits (those the calculator can hold); but since we know that the digits we have omitted will add no more than a 1 in the rightmost digit by carrying, the answer will be as close as we need. (In fact, calculators actually work with another digit or two beyond what they display, so that such errors never even show up.)

Proof by induction

That last teaser – that there is a formal method for actually proving infinite things without having to do infinite things – was just what Blake needed. He replied a couple days later (back then we didn’t have threaded conversations, so he didn’t know whether the same Doctor would get the message):

I recently sent you a letter regarding the proof that .999... = 1 in the FAQ. Specifically, I asked how it was possible to multiply PNR by ten when you could not get to the right-hand side of it. This was explained quite neatly.

However, the nice person who helped me out said it could be proven that one can multiply PNR by 10 by using a process of induction. I tried to search the archives, but the results I found there were not very satisfactory because it seemed to me that each induction question was actually problem-specific.

To cut short, I would like to know the induction proof for multiplying PNR by ten.

Cheers for any help you can give!

Doctor TWE first summarized the concept of induction:

Hi again Blake! Thanks for writing back!

In general, we use proof by induction whenever we want a proof that involves an infinite sequence or series, in this case .999...

Inductive proofs require two steps:

Step 1 (the basis step): Prove it for some starting value, like n = 1.

Step 2 (the inductive step): Prove that if it's true for n = k, then it is true for n = k+1.

For the inductive step, we assume that it is true for n = k, and we usually use this assumption in the proof itself.

A word of caution: because of their self-referential nature, inductive proofs are usually hard to follow; it's easy to get lost in the details, losing track of what n, k, and k+1 are supposed to be.

For other explanations and examples of mathematical induction, see

Proof by Mathematical Induction

Sum of n Odd Numbers

Inductive Misunderstanding

Here is the proof. As he’ll explain, rather than taking n as 1, 2, 3, …, he is taking n as the exponent, -1, -2, -3, …, going in the negative direction and so reversing the usual procedure.

In our case, we want to show that when multiplying the nth digit by 10, we get a 9 in the (n+1)st digit, no carry to the (n+2)nd digit, and a 0 in the nth digit (counting from the right.) We want no carry to the (n+2)nd digit so that we don't have to adjust the previous (greater place value) digits multiplied because of a "retroactive carry." We want a 0 in the nth digit so that it will not produce further carries when multiplying the next digit (smaller place value) by 10. Mathematically:

     10 * (9*10^n) = 9 * 10^(n+1)

We want to show that:

   a) the nth digit is 0
   b) the (n+1)st digit is 9
   c) the (n+2)nd digit is 0

To minimize the confusion, I'm going to use n = -1 as my basis step, and show that if it is true for n = k, then it is true for n = k-1 as my inductive step. This is the "negative" of the standard, but it will eliminate the need for using -k's and -(k+1)'s, etc. in my equations.

Step 1: Show that it is the case for n = -1

     10 * [9*10^(-1)]  =  9 * 10^(-1+1)
         10 * (9*0.1)  =  9*10^0
             10 * 0.9  =  9*1
                    9  =  9

   a) the -1st digit (the tenths place) is 0: True
   b) the 0th digit (the units place) is 9: True
   c) the 1st digit (the tens place) is 0: True

So we've proved the basis. Now for the inductive step.

Step 2: Prove that if it's true for n = k, then it's true for n = k-1.

Let's let d = the initial (k+1)st digit. This digit is 0 from conclusion (a) of the previous step. Then:

     10 * (9*10^k) + d  =  9 * 10^(k+1)
     9 * (10*10^k) + 0  =  9 * 10^(k+1)
          9 * 10^(k+1)  =  9 * 10^(k+1)

a) the kth digit is 0: True - 9*10^(k+1) has a 0 in the kth place.
b) the (k+1)st digit is 9: True - 9*10^(k+1) has a 9 in the (k+1)st place.
c) the (k+2)nd digit is 0: True - 9*10^(k+1) has a 0 in the (k+2)nd place.

So, if 10 times the kth digit is 9*10^(k+1), then 10 times the (k-1)st digit is 9*10^k, and thus 10 times the (k-2)nd digit is 9*10^(k-1), and thus...

Therefore 10 * 0.999... = 9.999...
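The inductive claim (multiplying by 10 shifts each digit one place to the left, with no stray carries) can be spot-checked in Python. This is my own sketch, and it is only a check, not a proof, since it inspects finitely many digits:

```python
from fractions import Fraction

def nines(N):
    # 0.999...9 (N nines), built one digit at a time: sum of 9*10**-k
    return sum(9 * Fraction(10) ** (-k) for k in range(1, N + 1))

# Each term obeys 10 * (9*10**-k) = 9*10**-(k-1), so multiplying the whole
# truncation by 10 yields 9 followed by one fewer nine after the point
for N in (2, 5, 10):
    assert 10 * nines(N) == 9 + nines(N - 1)
```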


More could be done to make this foolproof:

Depending on the skepticism of the intended audience, you may have to use a similar inductive proof for the step:

       9.999...
     - 0.999...
     ----------
       9.000...

For most audiences, the following will do:

     Let x = 0.999... 
     then 9.999... = 9 + x

     so 9.999... - 0.999...  =  (9 + x) - x  = 9

However, some audiences (like your teacher, perhaps) will not be satisfied that algebraic operations work on "infinite strings" (i.e. they won't be convinced that x - x = 0 when x is an infinite repeating decimal.)

I think some of the other explanations discussed last time may be more useful for certain kinds of skeptics; but it’s good to have a variety of approaches, to meet a variety of objections.
