How Can Multiplication Make It Smaller?

A fairly common question arises when students learn to multiply or divide fractions and decimals: They discover that multiplication, which always used to make numbers larger (2, multiplied by 3, becomes 6), now can make them smaller (2, multiplied by 1/2, becomes 1). How can that be? Here we’ll look at a few answers we’ve given to this kind of question over the years.

Multiply means increase, right?

Our first question, from Rizwan in 1998, focuses on language:

Multiplying by 1

Dear Dr. Math,

My question is as follows:

Multiply means increase in number. When 1 is multiplied by 1, the answer is 1. Why is it? Each one independent unit is being multiplied but the number is not increased. Looks erratic to me. Please define.

Doctor Rick answered:

Hello, Rizwan. This is an interesting question, and I can make it seem even stranger. Not only can you multiply by 1 and the result does not increase, but you can also multiply by 1/2 and the result is smaller.

As we’ll see, most people ask this larger question, rather than Rizwan’s tame question about the case where it merely fails to increase!

If you look at the original meanings of words, the same problem arises with the word "add". It comes from the Latin "addere" meaning "to give to." Yet I can add a negative number, with the result that something is actually taken away.

I think the same sorts of problems will arise in any language, and in other disciplines besides math. A word that means one thing in everyday language will have a somewhat different meaning, or a very specific and specialized meaning, in math or physics or economics or another specialized field of study. When people have a new idea or invent a new product, sometimes they invent an entirely new word to identify it. But sometimes they just use an existing word that has a similar meaning. For instance, an electrical current is like a current in a river, but it is not exactly the same.

This is the nature of language! Words grow into bigger meanings than they started with (or narrow down). As “add” in Latin (addere) meant “to join, attach, place upon”, from roots meaning “to give to” (see here), so “multiply” (multiplicare) in Latin meant “to increase”, from roots meaning “having many folds, many times as great in number” (see here). And, of course, “be fruitful and multiply” never meant “become fewer”! But it came to mean something far broader, while still being used with its original sense as well.

The basic words of math like "multiply" and "add" were adapted from everyday life long ago. Back then, concepts like negative numbers and even zero had not been developed. People would really only think in terms of multiplying by positive whole numbers. And why bother to multiply by 1? It doesn't do anything. So the use of the words made sense. 

But mathematicians gradually extended the meanings of the words. Not only can you multiply fractions or negative numbers, you can multiply matrices or numbers in modular arithmetic, where the idea of one number being greater than another is meaningless.

I talked about some of this in my post What is Multiplication … Really? (This also includes comments about repeated addition, which will come up soon.)

The things that we call "multiplication" today have a lot in common with simple multiplication by an integer greater than 1, so it makes sense to use the same word for them. Why invent a new word just because the original narrow meaning of the word doesn't fit any more?

In short, the problem that you have raised is a reason for the existence of specialized dictionaries of science and technology. If you look up the meaning of a word in a dictionary of everyday language and try to apply the definition to the way the word is used in a specialized field like math, you will often only get confused. Just use a word the way it is defined in the field you are working in, and don't worry about what it means in everyday life.

Sometimes, when speakers of another language translate their math terminology into English using an ordinary dictionary (or Google’s similar attempted translation), we have to stretch our minds to see what term they are aiming for. Usually I can see the connection and work it out; but it can lead to interesting reflections on word choices. Language is almost as “interesting” as math …

Multiplication is repeated addition, right?

The next question, from Jen in 1997, starts with the initial concept of multiplication, rather than the mere word:

Multiplying Fractions

Since we have defined multiplication as repeated addition, how is it possible that when you multiply two fractions, the product is smaller than either of the fractions?

It is common to introduce multiplication to children as repeated addition, as for example \(3\times 4\) means that we add 3 4’s \((4+4+4=12)\), or add 4 3’s \((3+3+3+3=12)\). A few years ago there was a lot of discussion about whether this definition should be taught at all, because it really only applies to natural numbers, and needs to be replaced with a more general definition as soon as a child gets beyond that. This question illustrates the problem.
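As an aside of mine, not part of the original exchange, here is a minimal Python sketch of exactly where the repeated-addition definition runs out: the loop count must be a whole number, so “add 4 to itself half a time” simply has no meaning.

    def repeated_add(n, x):
        # Multiply x by a natural number n by adding up n copies of x.
        total = 0
        for _ in range(n):
            total += x
        return total

    print(repeated_add(3, 4))   # 12, matching 3 x 4 = 4 + 4 + 4
    # repeated_add(0.5, 4) raises a TypeError: range() needs a whole number,
    # which is exactly the gap the more general definitions below have to fill.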

Doctor Ken answered, using one alternative definition that extends more easily to fractions:

Hi there -

Perhaps a more useful way to think of multiplication is to use the words "groups of."  For instance, when you have 4 x 5, think of it as "4 groups of 5".  We can write that out as 5 + 5 + 5 + 5, and add it up to get 20.  If we have 1/2 x 2, then we think of it as "half a group of twos", so our total is 1.  If we have 1/3 x 1/4, then that's one-third of a group of 1/4ths, and maybe this will make more sense to you.  One-third of a positive number is always smaller than the original number was.  That's one way to make sense of the equation 1/3 x 1/4  =  1/12.
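To make the “groups of” reading concrete, here is a small check (my illustration, using Python’s exact Fraction type, not part of the original answer):

    from fractions import Fraction

    # "a x b" read as "a groups of b":
    print(Fraction(4) * 5)                   # 20   -- 4 groups of 5
    print(Fraction(1, 2) * 2)                # 1    -- half a group of twos
    print(Fraction(1, 3) * Fraction(1, 4))   # 1/12 -- a third of a group of 1/4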

There was no reply, so we don’t know whether this did make sense to Jen; often it takes some discussion, because it does stretch the mind a bit! If you’re still unsure, read on – one benefit of having many Math Doctors is that they can offer different perspectives, which work for different people.

Think of the decimal as a fraction

Next, a similar question from Carolyn in 2001, this time about decimals rather than fractions:

Multiplying by a Decimal Number

I am having trouble understanding why multiplying by decimal numbers gives a product smaller than the factors. For example, 5 * .43 = 2.15.  I just always thought that multiplying would make the product bigger than the numbers I used.

In this example, the product is smaller than 5 (though larger than 0.43); we’ve multiplied 5 by a number less than 1, which is the key. (In some problems, the result is smaller than both factors, as in \(0.5\times 0.4 = 0.2\).)

Doctor Rick answered:

Hi, Carolyn.

When you only knew about whole numbers, you could make a rule like the one you stated - but not quite! Even when you only knew about whole numbers, there was one exception to your rule: multiplying by 1 does not make a number bigger. When you added zero, that was an exception to the corresponding rule for addition.

If you multiply by a number greater than 1, the product will be greater than what you started with. If you multiply by 1, the product is the same as what you started with. If you multiply by a number less than 1, the product is less than what you started with.

There is the key concept. Once you have decimals, and then negative numbers, you have lots of numbers that are not greater than 1, so there are far more exceptions to the initial “rule”.
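Here are the three cases side by side (an illustrative snippet of mine, not from the original answer):

    # Multiply 10 by factors greater than, equal to, and less than 1:
    for k in [3, 1.5, 1, 0.5, 0.1]:
        print(k, "->", 10 * k)   # 30, 15.0, 10 (unchanged), 5.0, 1.0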

But still, why do we now get a smaller number?

Let's make some sense of this by considering a simple example. 

Multiplying by 1/2 is the same as dividing by 2. Do you understand this? Multiplying by 1/2 means taking half of it; dividing by 2 means cutting it in two pieces and keeping one. These amount to the same thing. Since 1/2 = 0.5, and half of something is less than the whole, it makes sense that multiplying a number by 0.5 makes it smaller.

Fractions make it easy to see what is happening; decimals are just a different way to represent a fraction.
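A one-line sanity check (mine, not Doctor Rick’s) that the two operations always agree; 0.5 is represented exactly in binary, so the equality is exact:

    # Multiplying by 0.5 and dividing by 2 give identical results:
    for x in [5, 0.43, 100, 7.3]:
        assert x * 0.5 == x / 2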

Observe that what he said here is closely related to the fact that adding a negative number (adding \(-2\), say) is the same as subtracting its opposite (subtracting 2), which we know decreases the number.

And just as introducing negative numbers changes the nature of addition, turning addition and subtraction into varieties of a single operation, so introducing fractions changes the nature of multiplication, combining it with division into one operation that can both magnify and reduce a number.

Scaling up and scaling down

Jumping ahead to 2012, we have this question from Justine, digging deeper into the question of decimals:

Multiplication Makes It ... Smaller?!

Hello!

My question is, when you multiply with a decimal that is smaller than 1, why does it give a smaller product?

For example:

      2 x .5  = 1

And

    500 x .25 = 125
    
Isn't multiplying supposed to give a larger product?

My dad says 2 x .5 = 1 is the same as 2/2, which is also 1. Why is multiplying by .5 the same as dividing by 2?

He tries to explain it to me, but I just can't seem to get it!

Related to my question, I think, is when someone says "25% off of 50." That means you don't have to pay the .25. So you can do

   50 x .75 = 37.5 

I don't understand why, though.

When you get 25% off something, that means you get it for a smaller price ... but yet I multiplied? That is kind of strange to me.

Please reply back because this has been bugging me lots. Thanks!

I answered this time:

Hi, Justine.

When you multiply a number by 2, you get a bigger result. That's what you're used to until now.

When you multiply by 1, what do you get? The same number, right? 

Multiplying by a number GREATER than 1 makes it bigger; multiplying by 1 leaves it the same; and, continuing the pattern, multiplying by a number LESS than 1 makes it smaller.

For example, multiplying a number by 1/2 cuts it in half, making it smaller. Multiplying by 0.5 is the same thing.

This is what Justine has observed; but why? I suggested another alternative interpretation of multiplication:

In general, you can think of multiplication as scaling, which can make something either larger or smaller, scaling up or scaling down.

Multiplying by 3 scales up:

   0     1     2     3
   +-----+-----+-----+
   |      \      \      \
   |       \       \       \
   |        \        \        \
   |         \         \         \
   |          \          \          \
   |           \           \           \
   |            \            \            \
   |             \             \             \
   |              \              \              \
   |               \               \               \
   |                \                \                \
   +-----------------+-----------------+-----------------+
   0                 3                 6                 9

This enlargement turned 1 into 3.

Multiplying by 1/3 scales down:

   0     1     2     3     4     5     6     7     8     9
   +-----+-----+-----+-----+-----+-----+-----+-----+-----+
   |                /                /                /
   |               /               /               /
   |              /              /              /
   |             /             /             /
   |            /            /            /
   |           /           /           /
   |          /          /          /
   |         /         /         /
   |        /        /        /
   |       /       /       /
   |      /      /      /
   +-+-+-+-+-+-+-+-+-+
   0     1     2     3

This shrinking turned 1 into 1/3, and 3 into 1. Notice that multiplying by 1/3 undoes a multiplication by 3; it divides by 3.

Multiplying by some number k turns 1 into k, and either stretches or compresses the whole number line proportionally. Multiplying by 3 makes everything 3 times as big; multiplying by 1/3 makes everything 1/3 as big, which is smaller, just as 1/3 is smaller than 1.
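Here is the scaling view as a minimal sketch (my code, not part of the original exchange): multiplying by k sends each point x of the number line to kx.

    def scale(points, k):
        # Stretch (k > 1) or compress (0 < k < 1) every coordinate by k.
        return [k * x for x in points]

    print(scale([0, 1, 2, 3], 3))     # [0, 3, 6, 9] -- stretched by 3
    print(scale([0, 3, 6, 9], 1/3))   # approximately [0, 1, 2, 3] -- compressed
                                      # back (float rounding may nick a digit)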

Justine replied,

Thanks so much! Your answer really cleared up my confusion.

So now I won't get bugged by this question anymore.

Sometimes a picture says more than words can.

I realized in writing this that I never did answer the last part of Justine’s question, about “25% off of 50.” There, we are actually doing two things: multiplying by 25%, or 0.25, to find the amount of the discount (25% of 50, which is 12.5), and then subtracting that from the starting number, 50: \(50-12.5 = 37.5\). It’s really that subtraction that reduces the amount (though the multiplication has to result in a smaller amount, or we’d end up going negative!).

The shortcut she used, subtracting 25% from 100%, works because the whole process involves subtracting 25% of 50 from 100% of 50, which leaves 75% of 50: \(0.75\times 50=37.5\). This can also be explained using the distributive property from algebra, as \(50-0.25(50) = 1.00(50)-0.25(50)=(1.00-0.25)(50)=0.75(50)=37.5\).
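A tiny check (my illustration) that the two routes agree:

    price = 50
    discount = 0.25 * price    # 12.5, the 25% being taken off
    print(price - discount)    # 37.5 -- subtract the discount ...
    print(0.75 * price)        # 37.5 -- ... or keep 75% in a single step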

Division means sharing, right?

Now let’s turn to the other side of the question, with this from Emily in 2001:

Is Division Sharing?

Hi, Dr Math,

Why when I do a fraction division (e.g. 1/2 divided by 1/2 = 1) is the answer bigger than the first fraction in question (i.e. 1/2)? Shouldn't division mean sharing, so logically the answer is smaller than the first fraction?

Thanks, Emily

Emily is probably picturing one particular “model” (use) of division, in which dividing 6 by 2 means asking how much of the 6 goes to each of 2 people. (This is also called partitive, or sharing, division, finding the size of each of 2 parts; another model is quotative, or measurement, division, which asks how many parts you get if each part has 2 items out of the 6; both of these uses of division, as long as we use whole numbers, expect the result to be smaller than the given numbers. For more on these terms, see toward the end of my post, Dividing Fractions: How and Why.)

Doctor Ian answered, again offering a more general model:

Hi Emily,

It's not strictly true that 'division means sharing'. In fact, division is just another way of looking at multiplication. For example, all of the following are just different ways of saying the same thing:

  3 times 4 equals 12. 
  4 times 3 equals 12.
  12 divided by 3 equals 4.
  12 divided by 4 equals 3. 

It's sort of like looking at this picture,

     /\
    /  \
   /    \
  +------+
  |      |
  |      |
  +------+

and knowing that both of the following descriptions

  The triangle is above the square. 

  The square is below the triangle. 

are equally true, since they are really just different ways of saying the same thing.

So division is just looking at a multiplication backward, asking what one of the factors is. Technically, it is the inverse of multiplication.

In other words, whenever it is true that

  a * b = c

it must also be true that

  c / a = b

and

  c / b = a

(unless either a or b is zero, in which case all bets are off).  That's what we _mean_ by division.

(Unintentionally, this ties in with last week’s new question, which involved why division by 0 is undefined.)
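Here is that defining relationship as a quick check (my sketch, using exact fractions so no rounding intrudes):

    from fractions import Fraction

    a, b = Fraction(1, 2), Fraction(3, 4)
    c = a * b                          # 3/8
    assert c / a == b and c / b == a   # division recovers either factor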

Now, let's think about what this means for division by 1/2.  If

  c / (1/2) = a

then it must be true that

          c = a * (1/2)

which means that a must be _larger_ than c.

We can think of this in terms of sharing; if we divide 5 by 1/2, for example, using the quotative model we might be asking how many people we can divide 5 apples among, if each gets half an apple. When we cut each apple in half, we end up with 10 halves, which we can share among 10 people: \(5\div\frac{1}{2} = 5\times 2 = 10\). It’s harder to think of this in terms of partitive division, which would mean asking how much to put in “each pile” if we want to make 1/2 a pile. There are problems in which that would make some sense; but it’s better to just think in terms of what division is, as the inverse of multiplication.

Thinking about division as 'sharing' is one way that teachers try to make the concept simpler for students to understand. But it's important to realize that these simplifications usually lead you in the wrong direction once you start looking at things more carefully.

Just as words “grow up” and take on broader meanings, our understanding of a concept may have to grow up, to accommodate those new meanings.

Think of the decimal as a fraction, again

We’ll look at one more question, from Jen in 2003:

How Can Division Result in an Increase?

What is the logic behind dividing an integer by a decimal and getting a larger number?  For example, 

  100 / 0.9185 = 108.87

I just can't understand the logic!  It seems like the answer should be a smaller number.

Doctor Ian answered again:

Hi Jen,

Well, remember that a decimal is just a fraction.  For example, 

  0.23   = 23/100

  0.1542 = 1542/10,000

and so on.  So what happens when we divide by a fraction?  We multiply by the reciprocal:

    8         3
  ----- = 8 * -
   2/3        2

So if we divide by something where the numerator is smaller than the denominator, we'll end up multiplying by something where the numerator is larger than the denominator.  Does that make sense?

In other words, dividing by a number less than 1 means the same as multiplying by a number greater than 1, which results in a larger number.
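The same steps in exact fraction arithmetic (my illustration, not Doctor Ian’s):

    from fractions import Fraction

    print(Fraction(8) / Fraction(2, 3))   # 12 -- same as ...
    print(Fraction(8) * Fraction(3, 2))   # 12 -- ... multiplying by the reciprocal

    # Jen's example: 0.9185 = 9185/10000, whose reciprocal is greater than 1:
    print(Fraction(100) / Fraction(9185, 10000))   # 200000/1837, about 108.87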

That was the abstract view. Since students often grasp things more easily in concrete terms, we can turn to a model:

Here's another way to think about it, using the idea that division means breaking things into pieces.  Suppose I have something 6 inches long, and I divide it into pieces that are 2 inches long.  How many pieces do I get? 

     1    
   +----+----+ +----+----+ +----+----+ 
   |    |    | |    |    | |    |    |         
   +----+----+ +----+----+ +----+----+ 

            6 / 2 = 3

This is the quotative model I mentioned before, in the form of measurement.

Okay, now what if I divide it into pieces that are only 1/2 inch long?  I'm going to end up with 12 pieces, right? 

    1
    -
    2
   +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+  
   |  | |  | |  | |  | |  | |  | |  | |  | |  | |  | |  | |  |
   +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+  
   
            6 / (1/2) = 12

So we divided by something less than one, and ended up with more than our original result.

The number of pieces we get increases as the pieces get smaller.
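In code (an illustrative sketch of mine): the number of pieces is the length divided by the piece size, so smaller pieces mean a bigger count.

    def piece_count(length, piece_size):
        # How many pieces of the given size fit in the length?
        return length / piece_size

    print(piece_count(6, 2))     # 3.0
    print(piece_count(6, 0.5))   # 12.0 -- smaller pieces, more of them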

We always need to come back to the general definition, which is abstract:

But the clearest way to see the logic of this is to remember what we _mean_ by division.  That is, a division is just another way of representing a multiplication.  For example, when we say that

  3 * 4 = 12

two other ways to say exactly the same thing are

  12 / 3 = 4         and          12 / 4 = 3

So suppose we have something like 
 
  6 / (1/2) = ?

This is the same as saying that 

          6 = ? * (1/2)

Now, if this is true, then the value of '?' had better be larger than 6, right?

Again, since multiplying by a number less than 1 results in a smaller number, the unknown factor here has to be larger. Our two questions in this post, about multiplication and division, end up being one question.
