Separable Differential Equations

(A new question of the week)

We recently received a couple of different questions about solving differential equations by separation of variables, and why the method is valid. We’ll start with a direct question about it, and then look at an attempt at an alternate perspective using differentials.

Why can we separate variables?

Here is the first question, from teacher Vaneeta, with the title “Differential Equations – separation of variables”:

Dear Dr Math,

It makes sense why I am able to do this but my students have never seen me use this technique before. Is there a way I can explain why we can separate the variables and integrate w.r.t. a different variable on either side of the equation?

Thank you for your help.

An example

Let’s start with an example of this technique, so we can see where Vaneeta’s question comes from. I’ll use one from a site we have often referred students to for lessons: Paul’s Online Notes. Here is the first example from that page:

Solve the following differential equation and determine the interval of validity for the solution.

$$\frac{dy}{dx} = 6y^2x\text{ given that } y(1) = \frac{1}{25}$$

We first move all the y’s to the left and x’s to the right, treating the differentials dx and dy as if they were variables: $$y^{-2}dy=6xdx$$

We integrate both sides to get an implicit solution:$$\int y^{-2}dy=\int 6xdx\\ -y^{-1}=3x^2+C$$

Before solving for y, we can find the constant by using the given value \(y(1)=\frac{1}{25}\): $$-\left(\frac{1}{25}\right)^{-1}=3(1)^2+C\\ \\-25=3+C\\ \\C=-28$$

Now we plug in the value of C and solve for y: $$-y^{-1}=3x^2-28\\ y^{-1}=28-3x^2\\ y=\frac{1}{28-3x^2}$$

Finally, since the problem asks for the interval of validity: the solution is defined wherever \(28-3x^2\neq 0\), and the largest such interval containing the initial point \(x=1\) is \(-\sqrt{28/3}<x<\sqrt{28/3}\).
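As a quick sanity check (ours, not part of Paul’s lesson), we can hand the same initial value problem to the SymPy computer algebra system and confirm the result:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The initial value problem from the example: dy/dx = 6*y^2*x, y(1) = 1/25
ode = sp.Eq(y(x).diff(x), 6 * y(x)**2 * x)
sol = sp.dsolve(ode, y(x), ics={y(1): sp.Rational(1, 25)})
print(sol)  # Eq(y(x), ...) -- equivalent to y = 1/(28 - 3*x^2)

# Substitute the solution back into the ODE to confirm that it satisfies it
print(sp.checkodesol(ode, sol))  # (True, 0)
```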

The problem on the surface

The question is, why can we move differentials around so freely, and why can we integrate with respect to two different variables and expect everything to work out?

Doctor Fenton answered:

Hi Vaneeta,

That’s a very good question, which books don’t usually explain, but rather give the procedural steps.

To be separable, a differential equation must be in a form such as dy/dx = f(x)/g(y), where the right side can be separated into a product of two factors, each depending upon a single variable.  The procedure starts with separating the variables

g(y) dy = f(x) dx

and then integrates the left side with respect to y, and the right side with respect to x.  As you observe, this doesn’t seem to make sense.

If you read Paul’s lesson, you will have seen that he does start with a brief explanation along the lines of what we’ll be seeing here; but it deserves additional emphasis, as students are likely to skip past it, even when it is presented, because they want to get to the procedure. The procedure he described here is just what we did in our example.

Looking behind the scenes

In order to see the underlying reasoning, we have to slow down and write things a little differently:

However, the variable y is conceptually dependent upon x, so we can write y(x) instead of just y, and avoid treating the derivative dy/dx as a “fraction” (so we can “multiply by dx”).  Then we can write the DE as

g(y) dy/dx = f(x) ,

in which both sides are explicitly functions of x.  Now we can integrate both sides with respect to x:

∫ g(y(x)) dy/dx dx = ∫ f(x) dx  .

This could also be written without anything looking like a fraction at all, as $$\int g(y(x)) y'(x) dx = \int f(x) dx$$

All of this makes sense.  Now, on the left side, we can make a substitution u = y(x), for which du = dy/dx dx, and the left side becomes

∫ g(u) du .

But this is an indefinite integral, so if we can find an antiderivative G(u) for g(u), we have

G(u) = ∫ f(x) dx .

If F(x) is an antiderivative for f(x), then

G(u) = F(x) + C .

We discussed substitution in an integral in Integration by Substitution. There we see why treating \(du\) and \(dx\) in an integral as entities in themselves (differentials) makes sense.

Don’t forget that u and x are two different, but related, variables; we don’t want to make the mistake discussed two weeks ago in Two Integration Puzzlers!

But u = y(x), so this is really an equation with the independent variable x,

G(y(x)) = F(x) + C,

and this is often written

G(y) = F(x) + C ,

with the explicit dependence of y on x suppressed, so that y is treated as the dependent variable again.

In our example above, \(G(y) = -y^{-1}\) and \(F(x) = 3x^2\). This was our implicit equation for y.
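To see this concretely (a small check of our own, using SymPy), we can confirm that along the solution of our example the implicit relation \(G(y(x))=F(x)+C\) holds with \(C=-28\), and that differentiating \(G(y(x))\) recovers \(f(x)=6x\):

```python
import sympy as sp

x = sp.symbols('x')
y = 1 / (28 - 3 * x**2)      # the explicit solution found earlier

G = -1 / y                   # G(y) = -y^(-1), an antiderivative of g(y) = y^(-2)
F = 3 * x**2                 # F(x) = 3x^2, an antiderivative of f(x) = 6x

# G(y(x)) - F(x) should be the constant C = -28 along the solution
print(sp.simplify(G - F))            # -28

# d/dx [G(y(x))] should equal f(x) = 6x, i.e. g(y) dy/dx = f(x)
print(sp.simplify(sp.diff(G, x)))    # 6*x
```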

The usual description omits this use of the substitution process to evaluate the integral on the left side, and just treats the derivative dy/dx as a fraction (which, as we have pointed out many times, it is not!) and “multiplies by dx”, giving

g(y) dy = f(x) dx

(instead of g(y) dy/dx = f(x) ) and just uses y as a new variable instead of using a new letter u and making the explicit substitution.  You still wind up with the same integration to do,

∫ g(u)du or ∫ g(y)dy ,

but the formal approach is quicker by omitting these steps, and gives the same result:

G(y) = F(x) + C

(which is often an implicit function defining y in terms of x).

This is all it takes to support the method, which is just a shortcut for the substitution. And the fact that we can do this is part of the justification for using the notation \(\frac{dy}{dx}\), as discussed in Why Do People Treat dy/dx as a Fraction? and What Do dx and dy Mean?.
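To connect the shortcut back to the substitution in our example (a sketch of our own, using SymPy), we can carry out the careful route explicitly: integrate \(g(u)=u^{-2}\) with respect to u and \(f(x)=6x\) with respect to x, then rename u back to y to get the implicit solution:

```python
import sympy as sp

x, u, y, C = sp.symbols('x u y C')

# The careful route: substitute u = y(x), integrate g(u) = u**(-2) with respect
# to u, and integrate f(x) = 6x with respect to x, as in the derivation above.
G = sp.integrate(u**(-2), u)       # -1/u
F = sp.integrate(6 * x, x)         # 3*x**2

# Renaming u back to y gives the implicit solution G(y) = F(x) + C
print(sp.Eq(G.subs(u, y), F + C))  # Eq(-1/y, 3*x**2 + C)
```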

Can you prove it using limits?

Back in April, we had received a very similar question from Kalyan, who related the process directly to the definitions of the derivative and of the (definite) integral, by way of limits:

Hello Doctor,

How is separable variable justified?

dy/dt = t/y

How do we cross multiply here?

lim Δt→0 (Δy/Δt) = t/y

If we remove limit from there,

Δy + εΔt = (t/y)Δt

yΔy + yεΔt = tΔt

How is yεΔt removed and

yΔy  = tΔt

As Δt→0 , Δy→0

so, ΣyΔy = ΣtΔt

therefore, ∫ ydy = ∫ tdt

Is this how I should think about this?

Areas of both sides is equal, how?

Kalyan is using a simple example (with variables t and y rather than x and y), rather than looking for a general proof; that’s a good idea when you are trying to understand a concept.

What he sees is that we can’t really just cross-multiply to get \(\int y dy = \int t dt\), because the derivative isn’t really a fraction. Rather, it is a limit of a fraction. So, how can you justify what appears to be cross-multiplying? And then, how can you integrate (which can be thought of as finding an area) with these limits and deltas in the way?

In terms of the limit defining \(\frac{dy}{dt}\), the differential equation can be thought of as saying that \(\frac{\Delta y}{\Delta t}\) differs from \(\frac{t}{y}\) by a quantity \(\epsilon\) that approaches zero as \(\Delta t\) approaches zero, so that $$\frac{\Delta y}{\Delta t}+\epsilon = \frac{t}{y}\\ \Delta y+\epsilon\Delta t = \frac{t}{y}\Delta t\\ y\Delta y+y\epsilon\Delta t = t\Delta t$$ By this approach, he hopes to show that \(y\epsilon\Delta t\) approaches zero. Unfortunately, the integrals will be hard to fit into a proof, as it is the definite integral, not the indefinite integral, that is a limit of a summation, and the summations themselves are troublesome, as we’ll see. We won’t be completing this proof, because the other approach is so much cleaner. But it does give us a little more to discuss.

How it really works, reprise

Doctor Fenton gave an answer similar to the later one above, focusing on the better way to explain the method rather than on Kalyan’s idea:

The way I think about this is the following. The equation

dy/dt = t/y

assumes that y is a differentiable function of t, y(t), and the derivative of this function is t/y(t).

Then, multiplying by y,

y(t) dy/dt = t    ,

so the two functions on the two sides are the same.  Then integrating gives

∫ y(t) dy/dt dt = ∫ t dt .

If we let u = y(t) be a new variable in the integral on the left side, by the Substitution Theorem the left side becomes

∫ u du = ∫ t dt  ,

so integrating gives u²/2 = t²/2 + C.  But since u = y(t), this is y(t)²/2 = t²/2 + C.

Since we already know that y is a function of t, we can just write y²/2 = t²/2 + C.

This is the same procedure we saw before. There is really no cross-multiplication to justify, and no integration with respect to different variables.

But we could also just look at the following as a formal process.  Instead of explicitly introducing the new variable u, just think of y as a new variable and write the integrands

y dy = t dt ,

and integrate to y²/2 = t²/2 + C.

You can just work formally (i.e. just manipulate symbols), and avoid explicitly invoking the Substitution Theorem.

To finish, we can now solve for y by multiplying by 2 and taking the square root: $$\frac{y^2}{2}=\frac{t^2}{2}+C\\ y^2 = t^2 + 2C\\ y=\pm\sqrt{t^2+2C}$$ Given a particular initial value \(y(t_0)\), we would determine the appropriate sign and constant. For example, if \(y(0) = -3\), we would find that \(C=\frac{9}{2}\) and the sign must be negative, so that the solution is \(y=-\sqrt{t^2+9}\). To check, $$y'=-\frac{t}{\sqrt{t^2+9}}=\frac{t}{y}$$
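Here too a CAS can confirm the work (our own check with SymPy, not part of the correspondence):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# The separable equation dy/dt = t/y with the initial value y(0) = -3
ode = sp.Eq(y(t).diff(t), t / y(t))
sol = sp.dsolve(ode, y(t), ics={y(0): -3})
print(sol)  # Eq(y(t), -sqrt(t**2 + 9))
```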

But can it be done using limits?

Kalyan’s concern was with his own approach, and whether it could be made to work. He asked again, adding some details:

Hello Doctor,

Is my thought using

[lim Δt→0 (Δy/Δt)] = t/y

Δy + εΔt = (t/y)Δt

yΔy + yεΔt = tΔt

as ε decreases at a faster rate than Δt so I can remove that from the equation, what remains is

yΔy  = tΔt

If I do a summation of the terms in the equation on both sides of it with appropriate limit of approaching infinity, does that not reduce to an integral where I can take the upper limit as x ?

Wrong?

Again, notice that the particular summation is not specified, and it involves two different sums; moreover, a summation would yield a definite integral, not the indefinite integrals he shows.

Kalyan then added information about the background of his thinking:

I got this above idea from removing the limits from an expression.

For example –

[f(x+Δx) – f(x)] = f ′(x)Δx + εΔx

as I have removed the limit the expression would have a small error called epsilon. so,

∑ [f(x+Δx) – f(x)] = ∑ [f ′(x)Δx + εΔx]

As, ε is quite small, and Δx is also quite small, provided I would not take the limit to make it approach to a value that we call a limiting value.  Will ∑εΔx approach zero? That is my question.

We’ll take a look at this below, though it is not directly related to our topic. This example deals with the Fundamental Theorem of Calculus (FTC), and involves definite integrals, with the same summation on both sides, so that it doesn’t run up against the difficulty Doctor Fenton is emphasizing.

What’s missing in that approach

Doctor Fenton reiterated what he had said, emphasizing the key issue that Kalyan had not dealt with, namely that the summations he is introducing on each side are different (which was Vaneeta’s main concern as well):

Remember that the original post was about justification for the separation of variables for differential equations.  I gave you a justification for that, using the substitution theorem.

You are approaching it from a differential viewpoint, using difference quotient approximations to the derivative.  You have a formula in y on one side and a formula in t on the other (actually you have Δy + εΔt on the left side, and (t/y)Δt on the right).  You are trying to justify ignoring the εΔt on the left without taking a limit.  But the problem is that you still have a formula in y on the left side and one in t on the other.  How do you justify integrating with respect to y on the left side while integrating with respect to t on the other?

What I pointed out is that the DE dy/dt = f(t)/g(y) means that y = y(t) is a function of t.  If you write

g(y(t)) dy(t)/dt = f(t),

then both sides are functions of t, and you can integrate both sides with respect to t.  If G'(y) = g(y), then d/dt [G(y(t))] = g(y(t)) dy/dt, so the integral

∫ g(y(t)) y'(t) dt

can be evaluated as ∫ g(y) dy, either by using the Chain Rule (saying that if y = y(t), then d/dt [G(y)] = dG/dy · dy/dt) or the Substitution Rule.

That’s why separation of variables actually works.  Because of the Substitution Rule, an integral with respect to t is equivalent to an integral with respect to y if the integrand is of the correct form: if G'(y) = g(y), then d/dt [G(y)] = g(y) dy/dt, and the integral

∫ g(y)dy/dt dt = G(y(t)).
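To make this concrete, here is a small numerical check of our own (not part of the correspondence). For the solution \(y(t)=-\sqrt{t^2+9}\) found above, with \(g(y)=y\) and \(G(y)=\frac{y^2}{2}\), the integral of \(g(y(t))\,y'(t)\) with respect to t matches \(G(y(t))\) evaluated at the endpoints:

```python
import numpy as np

# y(t) = -sqrt(t^2 + 9) solves dy/dt = t/y with y(0) = -3 (from the example above)
t = np.linspace(0.0, 2.0, 100_001)
y = -np.sqrt(t**2 + 9)
dydt = t / y

# Left side: integral from t=0 to t=2 of g(y(t)) * y'(t) dt, with g(y) = y,
# computed with a simple trapezoid rule
integrand = y * dydt
lhs = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

# Right side: G(y(2)) - G(y(0)), with G(y) = y**2 / 2
rhs = y[-1]**2 / 2 - y[0]**2 / 2

print(lhs, rhs)   # both are 2.0 (up to numerical error)
```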

It’s conceivable that one could express all this in terms of the definitions, by incorporating the proofs of the substitution rule and other facts used here; but this is why we prove theorems from theorems! Basing everything directly on the basics can get impossibly complicated, and is unnecessary.

What about that FTC proof?

Subsequently, Kalyan started a new thread to ask about the Fundamental Theorem of Calculus proof he had based his ideas on:

f(x+Δx) – f(x) = f ′(x)Δx + εΔx

As I have removed the limit, the expression would have a small error called epsilon.

So, ∑ [f(x+Δx) – f(x)] = ∑ [f ′(x)Δx + εΔx] .

As ε is quite small, and Δx is also quite small, provided I would not take the limit to make it approach to a value that we call a limiting value.  Will ∑εΔx approach zero? That is my question.

Technically, the summations need to be specified: For what values of x are these terms being summed? But since in this case they are the same on both sides (and we are doing a definite integral, so it really is a summation), Kalyan’s specific question about the deltas is the main obstacle, and we can start with that.

Doctor Rick responded:

Hi Kalyan,

Doctor Fenton has already explained how and why separation of variables works mathematically. You have asked for input about your specific approach, so I will make some relatively informal comments about the idea.

You are going back to the idea of integration as the limit of a summation. Without being rigorous, let’s consider how it will work for a typical well-behaved function — one that has all derivatives in the domain over which you are to integrate. Then the function f(x) can be expanded as a power series in Δx:

f(x+Δx) = f(x) + f ′(x) Δx + (f ″(x)/2!) (Δx)² + …

Now we can write

f(x+Δx) – f(x) = f ′(x)Δx + (f ″(x)/2! + f ‴(x) Δx/3! + …)(Δx)²

This power series provides a more clearly defined way to talk about an error that “decreases faster” than something else, as we’ll see.

Compare this to your

f(x+Δx) – f(x) = f ′(x)Δx + εΔx

and you will see that your ε is some quantity (depending on Δx, but presumably finite) multiplied by Δx:

ε = (f ″(x)/2! + f ‴(x) Δx/3! + …)Δx

Let’s call the quantity in parentheses δ; it depends on x and Δx, but we can expect it to be bounded. Then εΔx = δ(Δx)². As Δx is decreased toward 0, (Δx)² decreases faster than Δx.

That’s what you were trying to say earlier when you wrote, “ε decreases at a faster rate than Δt.” It isn’t that ε itself approaches 0 more rapidly than Δx, but it does approach 0, and the term εΔx (the difference between Δy and dy/dx Δx) approaches 0 more rapidly than Δx. That’s what we need to know.

So far, we have clarified the basis of Kalyan’s informal argument, and his use of epsilon. To deal with the integral, we need to be more specific about what the summations mean:

Now, let’s examine the summation you ask about:

∑ [f(x+Δx) – f(x)] = ∑ [f'(x)Δx + εΔx]

We understand this as shorthand for a summation that, when the limit is taken, will turn into a definite integral (from x = a to x = b). That summation is over “slices” of equal width Δx. On the left we have

(f(a+Δx) – f(a)) + (f(a+2Δx) – f(a + Δx)) + … + (f(b) – f(b – Δx))

which is a telescoping series equal to f(b) – f(a). On the right, the first term of the sum becomes, in the limit, the definite integral ∫ₐᵇ f ′(x) dx.

If you are not familiar with telescoping series, you will find some examples in the post Summing Squares: Finding or Proving a Formula.

The Fundamental Theorem of Calculus, which this work leads to, says that $$\int_a^b f'(x)\, dx = f(b) - f(a)$$

What about the second term, ∑ εΔx? That is what you are asking about. The ε, as I mentioned above, is proportional to Δx: ε = δΔx. Thus the sum is

∑ εΔx = ∑ δ(Δx)² = (Δx)² ∑ δ

The sum ∑ δ, being over n = (b – a)/Δx terms, varies as 1/Δx, so that the entire sum ∑ εΔx varies as Δx and hence goes to 0 in the limit as Δx goes to 0.

I think this answers your question.
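As an empirical illustration of that conclusion (our own sketch, not part of the correspondence), we can compute ∑ εΔx for a concrete function, say f(x) = sin x on [0, 1], and watch it shrink in proportion to Δx:

```python
import numpy as np

# For f(x) = sin(x) on [0, 1], compute sum(eps * dx), where
# eps = (f(x + dx) - f(x))/dx - f'(x) is the error in the difference quotient.
f, fprime = np.sin, np.cos
a, b = 0.0, 1.0

for n in (10, 100, 1000, 10000):
    dx = (b - a) / n
    x = a + dx * np.arange(n)                  # left endpoints of the n slices
    eps = (f(x + dx) - f(x)) / dx - fprime(x)
    print(n, np.sum(eps * dx))                 # shrinks roughly like dx = 1/n
```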

So his basic idea of using epsilon has merit, though his attempt at justifying separation of variables failed to deal with the basic issue of integrating with respect to two different variables.
