1=0? Calculus Says So [or Not]

“False Proofs”, where seemingly good logic leads to nonsensical conclusions, can be a good way to learn the boundaries of reality — what to look out for when you are doing real math. We have a FAQ on the subject; there we discuss several well-known fallacies based on algebra, and have links to others. Today, I will look at some fallacies using differential and integral calculus. The FAQ quotes a summary I wrote:

Of course, these aren't really proofs, because they all have some error in them. What's important about these examples is that they show ways in which you can make a mistake in using math if you aren't careful enough. If you can understand where the error is, then you can look for the same kinds of errors in your own work, whether it's a proof for school or a calculation you make when you're designing a bridge. It also explains why mathematicians and scientists don't publish their results without first having others check them to make sure there isn't some subtle error in their calculations.

Integration by parts

Let’s first look at a fallacy in integration, which teaches a very important lesson. Here is the question, from 2001:

1 = 0 Fallacy

Reading the Dr. Math pages - and especially the ones on 1 = 0 fallacies - I remembered a 'proof' we ran up against during high school (VWO in the Netherlands). It makes use of integral calculus.

We learned the following rule for 'partial integrating':

Int(f(x)*g(x))dx = f(x)*G(x) - Int(f'(x)*G(x))dx

with G(x): the primitive function of g(x) and f'(x): the derivative of f(x) 

Now watch the following 'proof':

  Int(1/x^2 * 2x)dx  =  1/x^2 * x^2 - Int(-2/x^3 * x^2)dx (step 1)

  this yields:

  Int(2/x)dx  =  1 - Int(-2/x)dx  = 1 + Int(2/x)dx        (step 2)

  subtracting Int(2/x)dx on both sides yields:

    0 = 1                                                 (step 3)

Quite remarkable, I think!

We found two arguments that possibly explain the fallacy:

1) 1/x^2 * x^2 = 1 is an invalid step

2) we work with unbounded integrals. If we put lowerbound a 
   and upperbound b to the integral, we get for step 2:
    Int(2/x)dx (a,b)  =  1 (a,b) + Int(2/x)dx (a,b)

which yields:

    Int(2/x)dx (a,b)  =  1(b) - 1(a) + Int(2/x)dx (a,b)

because 1(b) = 1(a) = 1 we get:

    Int(2/x)dx (a,b)  =  Int(2/x)dx (a,b)

which of course is true.

Which of these two arguments tackles the fallacy?

Recall that “integration by parts” uses the formula \(\displaystyle\int u\ dv = uv - \int v\ du\), or, equivalently, \(\displaystyle\int u\ v^\prime\ dx = uv - \int u^\prime\ v\ dx\). It is explained here:

Choosing Factors When Integrating by Parts
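As a quick reminder of where the formula comes from: integrating both sides of the product rule \((uv)^\prime = u^\prime v + u v^\prime\) gives \(uv = \int u^\prime\ v\ dx + \int u\ v^\prime\ dx\), which rearranges to

\(\displaystyle\int u\ v^\prime\ dx = uv - \int u^\prime\ v\ dx\)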

What he has done here is to integrate \(\displaystyle\int \left(\frac{1}{x^2}\cdot 2x\right) dx\) by parts, using \(u=\frac{1}{x^2}\) and \(dv = 2x\ dx\). (In his terms, \(f(x) = \frac{1}{x^2}\), \(g(x) = 2x\), so that \(G(x) = x^2\).) Applying parts, we get:

\(\displaystyle\int \frac{1}{x^2}\cdot 2x\ dx = \frac{1}{x^2} \cdot x^2 - \int\frac{-2}{x^3} \cdot x^2\ dx = 1 - \int\frac{-2}{x}\ dx = 1 + \int\frac{2}{x}\ dx\)

But the integral we started with simplifies to

\(\displaystyle\int \frac{2}{x} dx\)

Rather than evaluate this (getting \(2\ln{x}\)), we notice that these two ways of simplifying the integral imply that

\(\displaystyle\int \frac{2}{x} dx = 1 + \int\frac{2}{x} dx\)

Subtracting the integral from both sides, we have 0 = 1!

What went wrong? I replied,

Your second explanation is essentially right. I would say it is an "indefinite" integral, rather than "unbounded." When you work with indefinite integrals, you always have to keep in mind that an arbitrary constant can be added to the result, since differentiation of a constant yields zero. So what you really have is

    Int(2/x)dx + C1  =  1 + Int(2/x)dx + C2

with a constant added to each side. This simplifies to

    C1 = 1 + C2

which of course doesn't say much, since C1 and C2 could be anything. That eliminates the problem entirely.

Your method of turning the integrals into definite integrals amounts to the same thing; evaluating the constant at the limits makes it disappear, so you can ignore it.

This is a very important lesson to learn; for example, we often see students thinking that their answer for an integral is wrong because it doesn’t match the answer in the book, even after simplifying. Often the explanation is simply that the two answers differ by a constant. When this happens, the student ought to notice that the derivatives of the two answers are the same (since the derivative of a constant is zero), so both are valid integrals. Here are some examples of this:

Calculus Constants

Constant Oversight

Putting it another way, by parts we got an answer of

\(1 + 2\ln{x} + C\)

while the direct method yielded

\(2\ln{x} + C\)

We just need different values of C to make them match.
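If you want to check this mechanically, here is a small Python sketch using the SymPy library (my own illustration, not part of the original exchange). It confirms that both answers differentiate back to the integrand \(2/x\), and differ by the constant 1:

  from sympy import symbols, log, diff, simplify

  x = symbols('x', positive=True)

  by_parts = 1 + 2*log(x)   # the answer from integration by parts
  direct = 2*log(x)         # the answer from integrating 2/x directly

  print(diff(by_parts, x))            # 2/x
  print(diff(direct, x))              # 2/x -- same derivative
  print(simplify(by_parts - direct))  # 1  -- they differ by a constant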

More about the constant of integration

We got a very similar question in 2003, with an even simpler integral:

Constant of Integration

Using integration by parts

  integration of (1/x)dx = [x * (1/x)] + integration of (1/x)dx

After simplifying by using the addition property of equality and multiplication, the answer would lead to 0 = 1, which should be wrong. The proof seems correct.

That is, taking \(u = \frac{1}{x}\) and \(dv = dx\), we get

\(\displaystyle\int \frac{1}{x}\ dx = x \cdot \frac{1}{x} - \int x \cdot \left(-\frac{1}{x^2}\right) dx = 1 + \int\frac{1}{x}\ dx\)

As before, subtracting the integral from each side, we are left with 0 = 1. Of course, we know the answer now; Doctor Jacques suggested trying the definite integral approach to clarify what was happening:

This looks like a paradox indeed, but try to see what you get if you evaluate the integral over an interval [a,b]...

Please feel free to write back if you are still stuck.

The student did that, but was not convinced about the indefinite form:

The problem is possible in the interval [a,b] but my teacher insists that there is a problem with the proof, while every one of us thinks the proof is correct.
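Carrying out that computation (as the student evidently did): integrating by parts over an interval \([a, b]\) with \(0 < a < b\), the boundary term is evaluated at the limits, and we get

\(\displaystyle\int_a^b \frac{1}{x}\ dx = \left[x \cdot \frac{1}{x}\right]_a^b - \int_a^b x \cdot \left(-\frac{1}{x^2}\right) dx = (1 - 1) + \int_a^b \frac{1}{x}\ dx = \int_a^b \frac{1}{x}\ dx\)

which is true, but tells us nothing; the troublesome 1 has vanished.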

Of course, a proof that 0 = 1 can’t be correct, unless all of math is wrong; presumably they just don’t see where the error is. So Doctor Jacques gave a brief explanation:

The problem is that an indefinite integral (antiderivative) is only defined up to an additive constant. More technically, it is not a single function, but an equivalence class of functions.

For example, INT(0 dx) = C, where C is any constant, since the derivative of a constant is 0.

In this case, we should have written:

 (INT{dx/x} + C_1) = 1 + (INT{dx/x} + C_2)

and this merely shows that the constants must satisfy C_1 = 1 + C_2.

When you compute a "real" integral, i.e. between limits, these constants disappear.

“Defined up to an additive constant” means that answers may differ by that “\(+ C\)” that students are taught to write at the end by rote. So the real answer is not just the function you write, but all functions that can be obtained by using different numbers for the constant — an “equivalence class”.
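In set language, that means an indefinite integral stands for a whole family of functions:

\(\displaystyle\int f(x)\ dx = \{F(x) + C : C \in \mathbb{R}\}\), where \(F\) is any one function with \(F^\prime = f\).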

This still left questions:

As I read your explanation I got confused with the constants. Isn't it that the constant of int(dx/x) on the left-hand side of the equation is equal to the constant at the right-hand side of the equation?

Writing “\(+ C\)” by rote leaves students not really understanding what it is! When I did that above (\(1 + 2\ln{x} + C\) and \(2\ln{x} + C\), using the same name C for the constant in each case), I had to consciously remind myself that C has to be different in each case. That is not obvious unless you think about it.

Doctor Jacques replied with a deeper explanation and a classic example:

These constants have no actual meaning - they are artificial.

When we write

  INT{f(x)dx} = g(x)

we simply mean that the derivative of g(x) is f(x). Of course, the derivative of g(x) + C, where C is _any_ constant, is also f(x).

The particular constant that comes out depends on the method of integration. The whole point of the exercise is to show that different calculations can yield functions that differ by a constant.

We can even illustrate this with the function 1/x in another simpler way. We know that the "true" integral is ln(x).

Now, in INT{dx/x}, if we make the substitution ax = u, with a > 0, you will easily see that the result is

  ln(ax) = ln(x) + ln(a) = ln(x) + constant

and, as we can take any positive number for a, we can make the constant ln(a) anything we wish.

There is no contradiction in writing:

  INT{dx/x} = ln(x)
  INT{dx/x} = ln(x) + C

because these are not true equalities between functions.

This is exactly the same as modular arithmetic. When we write

  2 = 7 (mod 5)

the numbers 2 and 7 are not simple numbers. 2 represents all the numbers that are a multiple of 5 + 2, and 7 represents all the numbers that are a multiple of 5 + 7, and these sets of numbers are the same - that is what the equality means (in this case, we often use a special symbol instead of =, to mean congruence).

In a similar way, an expression like INT{f(x)dx} represents, not a single function, but the set of all functions whose derivative is f(x). An equality between integrals is an equality between sets of functions.

If you are not familiar with modular arithmetic, when we write \(2 \equiv 7 \pmod{5}\), it means that 2 and 7 are equivalent in the sense that their difference is a multiple of 5. They are both representatives of the same “equivalence class”, which consists of the numbers \(\{\dots , -8, -3, 2, 7, 12, \dots\}\). The same idea applies to the indefinite integral: it is really the equivalence class of the function we write, meaning that we can add any constant to it and it will still be equivalent. That is what the “\(+ C\)” means.
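Doctor Jacques’s substitution example is easy to verify mechanically; in this SymPy sketch (again my own illustration), \(\ln(ax)\) differentiates back to \(1/x\) for every positive \(a\), so each choice of \(a\) produces an equally valid antiderivative:

  from sympy import symbols, log, diff, expand_log

  x, a = symbols('x a', positive=True)

  print(diff(log(a*x), x))      # 1/x, no matter what a is
  print(expand_log(log(a*x)))   # log(a) + log(x): shifted by the constant log(a)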

A fallacy in differentiation

Let’s move on to the other half of calculus. This question, from 2000, is a classic fallacy using differentiation:

Proof that 2 Equals 1 Using Derivatives

How can this be?

   kx = x + x + ... + x  (k-times)   ......................[1]

   xx = x + x + ... + x  (x-times)   ......................[2]

  x^2 = x + x + ... + x  (x-times)   ......................[3]

  dx(x^2) = 2x  (diff. wrt x)   ...........................[4]

  dx(x + x + ... + x) = 1 + 1 + ... + 1  (x-times)   ......[5]

so 

  2x = 1 + 1 + ... + 1  (x-times) {from eq. [4],[5]}   ....[6]

so we have

  2x = x   ................................................[7]

so

   2 = 1  (x <> 0)   ......................................[8]

Thank you!

Akram starts with the (debatable!) fact that multiplication means repeated addition, letting x itself be the multiplier in order to get the square. Then he differentiates both sides of \(x^2 = x + x + \dots + x\) to get \(2x = x\), so that (dividing by x if it is non-zero), 2 = 1.

Alternatively, at the last step, one could “solve” \(2x = x\) by subtracting x from both sides, yielding \(x = 0\): that is, every number (since x was unspecified) is equal to zero. That would include 1 = 0.

What went wrong?

First, as I read it, when he differentiated (using a nonstandard notation “dx” apparently meaning “d/dx”) \(\underbrace{x + x + \dots + x}_{x\text{ times}}\), he just differentiated each x to get 1, without considering that the number of terms is not constant. If you try to justify this by going to the definition of the derivative, you have to take the difference \(f(x + \Delta x)-f(x)\), which here becomes the difference of sums of different numbers of terms. Doctor Rick took it from there:

In taking the difference, you forgot that not only has each term changed its value, but also the NUMBER of terms has changed. Let's put in some numbers to make this clear. Let x = 3 and delta(x) = 1. Then:

                x^2 = 3 + 3 + 3

     (x+delta(x))^2 = 4 + 4 + 4 + 4

         delta(x^2) = 1 + 1 + 1 + 4

We still don't have the 2x that you expected; we've got 7 instead of 6. Why is this? You forgot something else. 2x is the DERIVATIVE of x^2 - the limit of delta(x^2)/delta(x) as delta(x) approaches zero. But the function as we have defined it (as a sum of x terms) has meaning only for integer values of x, so delta(x) can't be less than 1. The derivative is not defined. All we can define is a DIFFERENCE, as I have done (with delta(x) = 1, the smallest possible value), and this is not equal to 2x.

You can read more about this by going to our Dr. Math FAQ on "False Proofs, Classic Fallacies", linked on our main FAQ page:

   http://mathforum.org/dr.math/faq/faq.false.proof   

At the bottom there is a link to "derivatives," an item in our archives that is directly related to your problem.

The reference at the bottom is to this answer by Doctor Rob:

Derivatives

Really, it’s hard to write something sensible when you try to see what is happening with specific numbers! The notation \(\underbrace{x + x + \dots + x}_{x\text{ times}}\), for the specific numbers 3 and 4, has to mean \(\underbrace{3 + 3 + 3}_{3\text{ times}}\) and \(\underbrace{4 + 4 + 4 + 4}_{4\text{ times}}\), respectively. But to take a derivative you have to be able to let x be 3.001, for example, as you take the limit; and it makes no sense to repeat something 3.001 times. The fact is that multiplication can only be thought of as repeated addition for whole numbers, so the foundation of the argument is faulty.
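To see Doctor Rick’s point in code, here is a sketch of mine (not from the archive): defining \(x^2\) by repeated addition, the smallest difference quotient the definition allows overshoots \(2x\) by exactly 1:

  def square_by_addition(x):
      # "x added x times" -- only meaningful when x is a whole number
      total = 0
      for _ in range(x):
          total += x
      return total

  x, dx = 3, 1   # dx = 1 is the smallest change the definition permits
  print((square_by_addition(x + dx) - square_by_addition(x)) / dx)
  # 7.0 = 2x + dx, not the 2x = 6 the fallacy expects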

A reader in 2001 asked for a further explanation:

In the explanation you gave to show why the proof was wrong,

                x^2 = 3 + 3 + 3

     (x+delta(x))^2 = 4 + 4 + 4 + 4

         delta(x^2) = 1 + 1 + 1 + 4

what is delta(x^2)?

There's a delta(x) = 1, and x = 3, but I thought delta(x) was a term of its own, not a function of x.

Doctor Rick clarified:

I took some shortcuts in my explanation, trying to correct the writer's notation without changing it too much. I'll go through it in a different way for you. 

We're interested in finding the derivative of the function

  f(x) = x^2

using the definition of x^2 as a sum of x copies of x. The claim was that you can differentiate

  f(x) = x + x + x + ... + x (x times)

by taking the derivative of each term and adding:

  df(x)/dx = 1 + 1 + 1 + ... + 1 (x times)
           = x

In my explanation of why this is wrong, I can't really talk about derivatives, because the function has been defined only for integer values of x. Therefore I wrote in terms of finite differences: delta(x) is a finite (integer) change in x, and delta(x^2) is the change in x^2 due to this change in x. Normally we would use the Greek capital delta, and drop the parentheses around the x. You're right, it's not a function. Delta(x^2) is defined formally as follows:

  delta(x^2) = f(x+delta(x)) - f(x)

That is, the derivative is defined as \(\displaystyle\frac{df(x)}{dx} = \lim_{\Delta x\rightarrow 0}\frac{f(x + \Delta x)-f(x)}{\Delta x}\). In this case, \(\displaystyle\frac{d(x^2)}{dx} = \lim_{\Delta x\rightarrow 0}\frac{(x + \Delta x)^2-x^2}{\Delta x}\). Our \(\Delta(x^2)\) is the numerator, \((x + \Delta x)^2-x^2\).
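For the genuine function \(x^2\), defined for all real x, this limit does exist; a quick SymPy check (my own sketch) confirms it:

  from sympy import symbols, limit

  x, h = symbols('x h')
  print(limit(((x + h)**2 - x**2) / h, h, 0))   # 2*x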

You might prefer it if I talk in terms of independent variable x and dependent variable y:

[1]  y = f(x) = x + ... + x                     (x times)

Let's remain general rather than choosing delta(x) = 1. For any value of x and any change in x, delta(x), we can evaluate f(x+delta(x)), which will differ from y = f(x) by an amount delta(y):

[2]  y + delta(y) = (x+delta(x)) + ... + (x + delta(x)) (x+delta(x) times)

Subtract [1] from [2] to get delta(y):

  delta(y) = delta(x) + ... + delta(x)          (x times)
           + (x+delta(x)) + ... + (x+delta(x))  (delta(x) times)

It's that second line that the writer ignored. Applying the "definition" of (integer) multiplication, we get

  delta(y) = x*delta(x) + delta(x)*(x+delta(x))
           = 2x*delta(x) + delta(x)^2

Then

  delta(y)/delta(x) = 2x + delta(x)

Written out, he has said that \(\Delta y = \underbrace{(x + \Delta x) + (x + \Delta x) + \dots + (x + \Delta x)}_{x + \Delta x\text{ times}} - \underbrace{(x + x + \dots + x)}_{x\text{ times}}\) \(= \underbrace{\Delta x + \Delta x + \dots + \Delta x}_{x\text{ times}} + \underbrace{(x + \Delta x) + (x + \Delta x) + \dots + (x + \Delta x)}_{\Delta x\text{ times}}\).

All this is done, of course, ignoring the fact that \(\Delta x\) has to be an integer, so we can’t really take the limit.

If our function were defined on all real numbers rather than just integers, we would find the derivative by taking the limit of delta(y)/delta(x) as delta(x) approaches zero. The delta(x) term would go away, and we'd get the correct derivative. As it stands, you can see why (in my example in the original explanation) I got a difference of 7 instead of 6: there's an extra term  delta(x)^2 = 1.
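We can check that decomposition numerically with a short sketch of my own; the two pieces of \(\Delta y\) together reproduce \((x+\Delta x)^2 - x^2\) exactly:

  x, dx = 3, 2   # any positive integers

  changed_terms = x * dx          # x original terms, each grown by dx
  new_terms = (x + dx) * dx       # dx brand-new terms, each equal to x + dx
  delta_y = changed_terms + new_terms

  print(delta_y)                  # 16
  print((x + dx)**2 - x**2)       # 16 -- the same
  print(2*x*dx + dx**2)           # 16 -- i.e. 2x*delta(x) + delta(x)^2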

Another correspondent suggested that we "define" multiplication for non-integers like this (in my own notation):

  x*y = x + ... + x ([y] times)
      + x*(y-[y])

where [y] is the greatest integer not greater than y. It's not very helpful, because it only defines multiplication by a number greater than 1 in terms of multiplication by a number less than 1 (namely, y-[y]). However, it does allow us to take delta(x) to zero. If you work through it, you'll find that the derivative works correctly.

Admittedly, much of this is really nonsense. But sometimes it is useful to examine nonsense to see why it is.
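As a final experiment (my own code, with hypothetical names, not Doctor Rick’s), here is that extended definition in Python. Because the fractional leftover is handled separately, delta(x) can now shrink below 1, and the numerical difference quotient really does approach \(2x\):

  import math

  def times(x, y):
      # x*y = x added [y] times, plus x*(y - [y]) for the fractional part;
      # the leftover multiplication is done directly, as in the definition
      n = math.floor(y)
      total = 0.0
      for _ in range(n):
          total += x
      return total + x * (y - n)

  def f(x):
      return times(x, x)   # "x added x times", now defined for real x

  x, dx = 3.0, 1e-6
  print((f(x + dx) - f(x)) / dx)   # approximately 6.0 = 2x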

Incidentally, we have received at least two dozen questions equivalent to this one (going only by the number of times we referred to this answer). So this is not a rare issue!
