# Order of Operations: Historical Caveats

To finish up this long series on the order of operations, I want to look at where the “rules” came from; this will also demonstrate why some aspects are not fully agreed upon, continuing the discussion from last time.

## The “rules” are only descriptive

First, here are a couple of paragraphs from the 2017 answer I discussed last time (Even More on Order of Operations) that transition to this final topic:

In talking about the extra “juxtaposition” rule taught in some textbooks, I pointed out,

What many people don't realize is that the "rules" we teach are only an attempt at DESCRIBING what mathematicians did for a long time without explicitly stating what rules they were following. They do not PRESCRIBE what inherently must be done, a priori. In just the same way, English grammar came long after English itself, and has sometimes been taught in a way that is inconsistent with actual practice, in an attempt to make the language seem perfectly rational.

At this point, I referred to the post I’ll be discussing below, on the history of the order of operations. Then I concluded,

In my opinion, the rules as usually taught are not the best possible description of how expressions are evaluated in practice. (This is supported by a recent correspondent who found articles from the early twentieth century arguing that the rules newly being taught in schools misrepresented what mathematicians actually did back then.) Unfortunately, for decades schools have taught PEMDAS as if it must be taken literally, so that one must do all multiplications and divisions from left to right, even when it is entirely unnatural to do so. The better textbooks have avoided such tricky expressions; but others actually drill students in these awkward cases, as if it were important.

I’ll examine that 1917 article later.

## So who made the rules? Nobody.

We’ve often been asked where the rules came from. The fullest answer we’ve given was to this version of the question, from 2000:

History of the Order of Operations

I was teaching a computer class and the history of order of operations came up. Where, when and with whom did the order of operations first originate? Was it the Greeks or Romans?

Thank you! There is a whole class waiting to hear the answer.

The problem is that not much is written about this history; I had to pull a few ideas (some of them just speculations) from a variety of areas. My answer is, in fact, given as a reference in the Wikipedia article on the subject. I began:

The Order of Operations rules as we know them could not have existed before algebraic notation existed; but I strongly suspect that they existed in some form from the beginning - in the grammar of how people talked about arithmetic when they had only words, and not symbols, to describe operations. It would be interesting to study that grammar in Greek and Latin writings and see how clearly it can be detected.

As mathematicians through the 17th century gradually moved from stating equations entirely in words, to modern symbolic notation, the grammar of the symbols was part of that development, and likely carried along some of the grammar of their languages. For a quick look at what some of the early notations looked like, see here. Every writer used a slightly different notation, which he explained at the beginning of a book or chapter.

Subsequently, mathematicians just informally and tacitly agreed on how to read their various notations; and textbook authors formalized the “rules”, largely in the 1800’s.

At the other end, I think that computers have influenced the subject, so that it is taught more rigidly now than it used to be, since programming languages have had to define how every expression is to be interpreted. Before then, it was more acceptable to simply recognize some forms, like x/yz, as ambiguous and ignore them - something I think we should do more often today, considering some of the questions we get on such issues.

Many people have written to us, convinced that the rules had changed since they were in school. That is, in fact, possible in some areas! Computers need well-defined rules more than people do, so some details that humans had had no trouble working around were formalized in computer languages, and some of that has leaked back into ordinary mathematical writing and teaching.

I spent some time researching this question, because it is asked frequently, but I have not found a definitive answer yet. We can't say any one person invented the rules, and in some respects they have grown gradually over several centuries and are still evolving.

We’ll see some of that current evolution at the end.
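The point about programming languages is easy to make concrete. A minimal sketch in Python (the values are arbitrary, chosen only for illustration) shows how a language resolves what human notation leaves ambiguous: every expression has exactly one parse.

```python
# How a programming language removes ambiguity: every expression has one parse.
x, y, z = 12, 2, 3

# Division and multiplication have equal precedence and associate left to right,
# so x / y * z is parsed as (x / y) * z, never as x / (y * z).
left_to_right = x / y * z      # (12 / 2) * 3 = 18.0
grouped = x / (y * z)          # 12 / (2 * 3) = 2.0

print(left_to_right, grouped)

# There is no implicit multiplication at all: "yz" is just another identifier,
# so the ambiguous form x/yz cannot even be written in the language.
```

Notice that Python sidesteps the x/yz problem entirely by having no juxtaposition: the writer is forced to choose one of the two unambiguous forms.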

### Hierarchy and grouping

The easiest and earliest part seems to have been the central hierarchy of operations:

1. The basic rule (that multiplication has precedence over addition) appears to have arisen naturally and without much disagreement as algebraic notation was being developed in the 1600s and the need for such conventions arose. Even though there were numerous competing systems of symbols, forcing each author to state his conventions at the start of a book, they seem not to have had to say much in this area. This is probably because the distributive property implies a natural hierarchy in which multiplication is more powerful than addition, and makes it desirable to be able to write polynomials with as few parentheses as possible; without our order of operations, we would have to write

ax^2 + bx + c
as
(a(x^2)) + (bx) + c

It may also be that the concept existed before the symbolism, perhaps just reflecting the natural structure of problems such as the quadratic.
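The parenthesis-saving effect of the hierarchy can be checked directly; here is a sketch in Python, with arbitrary sample values for the coefficients and the variable:

```python
# With the standard hierarchy (exponents, then multiplication, then addition),
# a polynomial needs no parentheses; without it, nearly every term would.
a, b, c, x = 2, 3, 4, 5

compact = a * x**2 + b * x + c              # relies on the hierarchy
parenthesized = (a * (x**2)) + (b * x) + c  # what we would be forced to write

print(compact, parenthesized)   # the two forms agree: 2*25 + 3*5 + 4 = 69
```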

What I’ve said here is closely related to the reasons for the order of operations discussed in Order of Operations: Why These Rules?.

You can see an example of early notation in "Earliest Uses of Grouping Symbols" at:

http://jeff560.tripod.com/grouping.html

where the use of a vinculum (an early version of parentheses) shows, both by its presence (around an additive expression) and by its absence (around the multiplicative term "B in D"), that the rules were implicitly followed:

In Van Schooten's 1646 edition of Vieta,
     ________________
B in D quad. + B in D

is used to represent B(D^2 + BD).

The example is also found in Cajori’s A History of Mathematical Notations, Vol 1, p. 182, and again (in a discussion of aggregation, or grouping, symbols) on p. 386.

At this point in the development of notation there was a mixture of words and symbols; multiplication was indicated by the word “in” (I’m not sure why!), and not yet by any of our current symbols (much less by juxtaposition). But in order to ensure that the two terms are added before the multiplication by B, they must be grouped; whereas under the vinculum we clearly have two terms, each formed by multiplication before the addition is performed.

### Not always consistent

2. There were some exceptions early in this development; in particular, math historian Florian Cajori quotes many writers for whom, in the special case of a factorial-like expression such as

n(n-1)(n-2)

the multiplication sign seems to have had some of the effect of an aggregation symbol; they would write

n * n - 1 * n - 2

(using a dot or cross where I have the asterisks) to express this. Yet Cajori points out that this was an exception to a rule already established, by which "nn-1n-2" would be taken as the quadratic "n^2 - n - 2."

This reference is to Cajori Vol. 1, p. 396, where he says, “In $$n\cdot n-1\cdot n-2$$, or $$n\times n-1\times n-2$$, or $$n, n-1, n-2$$, it was understood very generally that the subtractions are performed first, the multiplications later, a practice contrary to that ordinarily followed at that time.”

There was also an early notation in which a multiplication would be replaced by a comma to indicate aggregation:

n, n - 1

would mean

n (n - 1)

whereas

nn-1

meant

n^2 - 1.

This use of commas is explicitly mentioned on page 390. It does seem helpful to have a symbol that combines multiplication and grouping for cases where that is appropriate. Still, this is all a special case distinct from the multiplication-first order that was already well-established.
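The difference Cajori describes is easy to check numerically. Here is a sketch in Python, spelling out both readings of the string of symbols for a sample value of n:

```python
# Two readings of the symbols "n . n-1 . n-2", checked at n = 5.
n = 5

# The special "aggregating" reading some early writers intended:
factorial_like = n * (n - 1) * (n - 2)   # n(n-1)(n-2) = 5 * 4 * 3 = 60

# The ordinary multiplication-first reading that, per Cajori, was already the rule:
quadratic = n * n - 1 * n - 2            # n^2 - n - 2 = 25 - 5 - 2 = 18

print(factorial_like, quadratic)
```

The two readings disagree for every n above 2, which is why the aggregating use of the multiplication sign (or comma) had to be understood from context.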

### Multiplication and division

If the “rules” evolved gradually through usage, it should not be surprising that some are still not fully settled:

3. Some of the specific rules were not yet established in Cajori's own time (the 1920s). He points out that there was disagreement as to whether multiplication should have precedence over division, or whether they should be treated equally. The general rule was that parentheses should be used to clarify one's meaning - which is still a very good rule. I have not yet found any twentieth-century declarations that settled these issues, so I do not know how they were resolved. You can see this in "Earliest Uses of Symbols of Operation" at:

http://jeff560.tripod.com/operation.html

Cajori makes this statement on page 274.

### Starting to teach rules

4. I suspect that the concept, and especially the term "order of operations" and the "PEMDAS/BEDMAS" mnemonics, were formalized only in this century, or at least in the late 1800s, with the growth of the textbook industry. I think it has been more important to textbook authors than to mathematicians, who have just informally agreed without needing to state anything officially.

By “this century” I meant, somewhat belatedly, the 20th century. I don’t have specific information on the earliest uses of these terms, but I’ll get to one piece of evidence below.

### The implicit multiplication controversy

The rules were never decreed “officially”, and even now are unstable, as some parts are not taught consistently (the topic of the last post):

5. There is still some development in this area, as we frequently hear from students and teachers confused by texts that either teach or imply that implicit multiplication (2x) takes precedence over explicit multiplication and division (2*x, 2/x) in expressions such as a/2b, which they would take as a/(2b), contrary to the generally accepted rules. The idea of adding new rules like this implies that the conventions are not yet completely stable; the situation is not all that different from the 1600s.

As in early writings on symbolic algebra, it is still necessary to state the rules one is using!
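The stakes of the implicit-multiplication question are concrete: the two readings of a/2b give different values. A sketch in Python (sample values chosen arbitrarily), writing out each interpretation explicitly since Python itself has no juxtaposition:

```python
# The two competing readings of "a/2b", checked at a = 12, b = 3.
a, b = 12, 3

strict_rules = a / 2 * b       # strict left-to-right PEMDAS: (a/2)*b = 18.0
juxtaposition = a / (2 * b)    # implicit product binds tighter: a/(2b) = 2.0

print(strict_rules, juxtaposition)
```

Since the two conventions disagree by a factor of b squared here, an author who does not state which rule is in force leaves the reader guessing.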

### Natural rules vs. artificial rules

I concluded that some rules are inherent in the way operations work, and are clearly appropriate, while others are more debatable:

In summary, I would say that the rules actually fall into two categories: the natural rules (such as precedence of exponential over multiplicative over additive operations, and the meaning of parentheses), and the artificial rules (left-to-right evaluation, equal precedence for multiplication and division, and so on). The former were present from the beginning of the notation, and probably existed already, though in a somewhat different form, in the geometric and verbal modes of expression that preceded algebraic symbolism. The latter, not having any absolute reason for their acceptance, have had to be gradually agreed upon through usage, and continue to evolve.

That’s where I left it in 2000.

## The rules were never quite right

In 2017 we had a long discussion (never archived) with a reader named Karen, in the course of which there was a reference to an interesting article by N. J. Lennes in the American Mathematical Monthly of February 1917: Discussions: Relating to the Order of Operations in Algebra. Here are my comments on it:

I agree with some aspects of the article, and in fact said something like it both in my "History of the Order of Operations" and in my comment to you about what my ideal rules would be. When I answer questions about the issue, I take the usual teaching, and the current contradictory rules, for granted, and don't generally dig into whether the rules make sense. But the article is about exactly what I usually leave unsaid.

I generally talk about what we should do given the way the order of operations is currently taught, rather than what it would be if I had my say. Here, I have my say, because that is what Lennes was doing, at a time in history when that was easier.

After quoting some of what I said above on the history, specifically on disagreements over the order of multiplication and division, I continued:

One interesting thing about Cajori's comment is that he only talks about the order of the obelus (÷) and the explicit multiplication sign (Greek cross, ×), and doesn't mention expressions combining the obelus and implicit multiplication (juxtaposition). The same is true of all the references in the Earliest Uses page except the modern example.

The article you found (which I haven't seen before) is from a little before Cajori, and the first section likewise does not mention juxtaposition. It is my impression that the "rules" for order of operations (which as I have mentioned elsewhere are, like many prescriptive "rules" of grammar, really descriptions rather than actual underlying rules) were developed in such a context, using only explicit multiplication, where it feels reasonable since all the signs are the same size! When you start using juxtaposition (as in the second section of your article), things change.

For instance, in $$2\div 3\times x$$, the symbols look similar and separate the numbers by similar distances, whereas in $$2\div 3x$$, the multiplication appears “tighter” and is naturally treated as a single unit. And the latter is where Lennes, but not Cajori, focuses his attention:

As Lennes points out, the "rules" that were (and are now) taught as if they were laws of nature, do not actually reflect what was found in real use, *in cases when juxtaposition is used for multiplication*. The whole idea is really a false extrapolation from what is done in easy cases to a general rule, making everything seem neater than it really is. (Educators have made that same sort of mistake in other areas as well.) That has led to generations of students being taught a simplistic set of rules that really don't work in mathematicians' own writings. That, ultimately, is what leads to the ambiguity we have been discussing, as people have been forced to fill in the gap between rules and reality in whatever way they can.

When rules don’t fit nature, people don’t follow them.

This is not unlike pseudo-rules of grammar like “never end a sentence with a preposition” that are based on false assumptions about how things work, and not on how real people talk.

Lennes says that Chrystal (whoever he is -- I haven't been able to find such a textbook that might have been an early source of the "order of operations") is careful never to use an obelus followed by a product (which is true of many modern texts as well), but that others do, and interpret, say, 10bc ÷ 12a as (10bc)/(12a), so that they are inconsistent with their own stated rules. (My first exposure to this issue came from students asking about similarly inconsistent modern texts.)

Just as I have said about modern textbooks, the best of them avoided examples like $$10bc\div 12a$$, but those that included them too often failed either to follow their own rules or to state that they were making an exception, and just evaluated as if it were $$\frac{10bc}{12a}$$. Why? Because they think that rules are rules, but they are too human to really follow them.

I disagree with Lennes in his conclusion, however. He says that the rule should be that all multiplications are to be done first. As I have said to you, if I were free to decree the rule, I would have only implicit multiplication done before divisions, and that perhaps only when the division is expressed with the obelus. Lennes gives no examples of following his rule with explicit multiplication combined with an obelus, which I think would be less convincing. So he is perhaps making the same mistake that Chrystal does.

When you use only one type of example, you can fail to show reality, whichever direction that is.

In the end, he comes to the same sort of comparison I make between math and grammar, saying that treating 12a as the divisor is an "idiom" that must be recognized. As he says, this is a matter not of logic but of history -- it is not something that can be proved, or that can be done by consistently following axioms, but by accurately describing actual use. I agree: the rules as taught are not accurate. I support them only because that is what students learn, and with the strong caveats that parentheses must be used to clarify, and that the obelus is best avoided.

I have taken this position (for example, “the alternative rule is not unreasonable”), ever since my first answer to the question, while also warning against ever writing a division followed by a multiplication (of any sort) without clarifying the meaning by parentheses. It was nice to learn that this view went back a hundred years!

Idiom is not exactly the right word here, but the idea is important: the order of operations “rules” are not binding, but should only describe actual usage.

Finally, I recommended another page I had found on the subject:

Order of arithmetic operations; in particular, the 48/2(9+3) question

which, as I said, could have been written by me, though its author evidently was new to the issue. As he says of PEMDAS (which he clearly is not familiar with as a teaching tool),

But so far as I know, it is a creation of some educator, who has taken conventions in real use, and extended them to cover cases where there is no accepted convention. … Should there be a standard convention for the relative order of multiplication and division in expressions where division is expressed using a slant?  My feeling is that rather than burdening our memories with a mass of conventions, and setting things up for misinterpretations by people who have not learned them all, we should learn how to be unambiguous, i.e., we should use parentheses except where firmly established conventions exist.  If expressions involving long sequences of multiplications and divisions should in the future become common, then there may be a movement to introduce a standard convention on this point.  (A first stage would involve individual authors writing that “in this work”, expressions of a certain form will have a certain meaning.)  But students should not be told that there is a convention when there isn’t.

It’s good to know I’m not alone in my opinions.

### 2 thoughts on “Order of Operations: Historical Caveats”

1. The difference between handwriting and typewriting mathematics explains many common confusions.

In typewritten notation, which does not normally include a horizontal line to indicate division, these are the two mutually exclusive forms of the algebraic compound fraction notation that can exist: 48(9 + 3)/2 [where the brackets lie above the division line when handwritten] and 48/2(9 + 3) [where the brackets lie below the division line when handwritten].

48(9 + 3)/2 and 48/2 (9 + 3) or 48/2 x (9 + 3) are the same thing, but would all be different ways of indicating clearly, in typewriting, that the brackets are not below the division line. But as soon as you compound the brackets with the 2, it’s now in fraction notation and below the division line in handwriting – in fact it’s the only way to typewrite that accurately, and proves the convention.

These are equations partially resolved from BODMAS string notation into the more precise algebraic compound notation used for the final solution of a written equation. The oblique is not a BODMAS string notation symbol, but the difficulty of finding and typing the traditional BODMAS division symbol in many typed media has led to ambiguity over what an author intends when using an oblique. Traditionally the oblique was, and is, an acceptable final-form algebraic compound notation symbol, used in handwriting to record fractions that are incapable of further resolution in whole numbers.

Similarly, juxtaposition of letters with numerals without interposing traditional BODMAS string notation symbols such as brackets or multiplication symbols is also an algebraic compound notation convention not a BODMAS string notation convention.

Such conventions, and how to convert between them, can be as complex as any language, but they were taught consistently from at least the beginning of the 1970s in Britain and throughout its former colonies north and south of the equator, i.e. the full span of my education, which allows me to attest to this directly. We were all expected to use these governing conventions to translate mathematical equations into our own language, in order to understand what the author of the equation intended. Any confusion over them comes down to individuals’ imperfect memory of the convention through lack of frequent use, to individuals adopting imprecise homebrew writing conventions later in life, or to typewriting not being a convenient form of writing in the conventional language of mathematics.

1. Hi, Andrew.
I think you are largely referring to the “implicit multiplication” controversy that is the focus of the end of this post, and all of the previous post.

When you talk about “typewritten” notation, I assume you mean writing all on one line; “handwriting”, as I understand it, is essentially the same as what I would call “typeset” notation as used in books, and allows what you call “algebraic compound fraction notation” with the horizontal fraction bar. I’m not sure if there are other distinctions you are making. You also talk about “BODMAS string notation” (apparently your own invention, as I find the term used nowhere), which I suppose is the same as typeset notation (found in books) but perhaps all on one line, as you distinguish it from “algebraic compound notation”. These nonstandard distinctions make it hard to follow your point.

You appear to be taking “48/2(9 + 3)” to mean $$\frac{48}{2(9+3)}$$, but “48/2 (9 + 3)” with a space to mean $$\frac{48(9+3)}{2}$$. I’m not sure whether you think this is prescribed by some rules, or is a mistake. My recommendation is to use parentheses to make it clear, rather than depending on easily missed subtleties of spacing: “48/(2(9 + 3))” vs “(48/2)(9 + 3)”.

I would agree that the tendency to replace the obelus (÷) with a slash or oblique (/) in typing contributes somewhat to students’ tendency to assume that everything that follows is in the denominator, as if the fraction bar were just being tilted. But the same issue is present with the obelus; none of what I have said applies only to the slash. And if you are implying that BODMAS does not apply to typeset notation, I think you are wrong.

Perhaps my main disagreement with you is that you are assuming a universally accepted, prescriptive convention that people simply forget to follow, whereas I believe that the “rules” are (or should be) descriptive, and the difficulty in this area arises largely from their not adequately describing natural usage. There is no authority that sets rules. The conflicts we see are found even between textbooks, not just in typing, and do not on the whole involve the “oblique”. But it may be different in your part of the world.

If you’d like to discuss this at length, you might use the Ask a Question link to make it more convenient.
