The concept of independent events can be both very simple and easily misunderstood. We’ll be looking at several explanations of the idea, starting with the basics and then digging into some deeper questions that are often overlooked.

## What is independence?

We can start with this question from 1998, asking for the basics:

Independent and Dependent Events How do I find the probability of an independent and dependent event? Can you explain independent and dependent events to me?

Doctor Margaret took the occasion to give Kerry a thorough introduction to the topic as a beginning student would see it (with discrete probabilities).

Hi Kerry, Thanks for writing to Dr. Math. First let's talk a little about the sample space. The sample space is the set of all possible outcomes for an event or experiment. For example, the sample space of a die (one of a pair of dice) is six: S = {1, 2, 3, 4, 5, 6}. Each number is what you would see on each side of the die. The sample space for a coin is two: S = {H, T}, H for heads and T for tails.

### Independent events: coin and die

Now let's see if we can understand the idea of an independent event first. Informally speaking, we say that two events A and B are independent if when one of them happens, it doesn't affect the other one happening or not. Let's use a real life example.

One thing that is often misunderstood is that independence is about whether one event changes the **probability** of the other happening — not whether it affects **how** it will happen. We’ll see that later. We’ll also be talking later about whether it matters which event we start with, or whether the two events have to relate to separate actions (such as two different dice), or can be part of one action.

Doctor Margaret also quietly transitioned from the incorrect terminology of “an independent event” to “independent events”, as the concept deals with the relationship between two events, not a single event.

Let's say that you have a coin and a die (one of a pair of dice). You want to find the probability of tossing the coin, getting heads one time, and then tossing the die and getting a five one time. We'll call the coin toss event A. The plain old probability of tossing a coin and getting heads is 1/2. That is:

A = (number of favorable outcomes) / (total possible outcomes, the sample space) = 1/2

The probability of getting a five when you toss the die will be event B, and that is:

B = (number of favorable outcomes) / (total possible outcomes, the sample space) = 1/6

Now for the independent part. Does your chance of getting a five when you toss the die have anything to do with whether you get heads or tails when you toss the coin? It does not. That's why they are independent.

This is the idea underlying the concept of independence: The coin can’t affect what happens to the die, or the die affect what happens to the coin. But we’ll see as we continue that this is not the *only* way independence can occur.

The probability of independent events occurring is found by multiplying the probability of the first event occurring by the probability of the second event occurring. Generally, it looks like this:

P(A,B) = P(A) * P(B)

In our example it looks like this:

P(H,5) = P(A) * P(B) = 1/2 * 1/6 = 1/12

Observe that Doctor Margaret is using the notation \(P(A, B)\) for “the probability that A happens, then B”. Since the order doesn’t really matter in this case, it could also be called \(P(A \text{ and } B)\).

We discussed this idea of multiplication for “A and B” in When Do I Add or Multiply in Probability? and WHY Do We Add or Multiply in Probability? The main idea is that 1/6 of all rolls of the die will be a 5, and since they are independent, that includes 1/6 of the 1/2 of all tosses of the coin that will be heads. And 1/6 of 1/2 is multiplication.
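To make the multiplication rule concrete, here is a small Python sketch (my own illustration, not part of the original article) that enumerates the twelve equally likely coin-and-die outcomes and counts the favorable ones:

```python
from fractions import Fraction

# Enumerate the 12 equally likely (coin, die) outcomes.
outcomes = [(coin, die) for coin in "HT" for die in range(1, 7)]

# Favorable: heads on the coin and a five on the die.
favorable = [o for o in outcomes if o == ("H", 5)]

print(Fraction(len(favorable), len(outcomes)))  # 1/12, matching 1/2 * 1/6
```

Counting outcomes directly agrees with the product \(P(A) \cdot P(B)\), as it must for independent events.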

### Dependent events: marbles in a box

Now for dependent events. A dependent event is one where the outcome of the second event is influenced by the outcome of the first event. For example, let's say we have a box with 6 marbles: 3 red, 1 blue, 1 green and 1 yellow. What's the probability of picking a yellow marble? We know that probability is 1/6. What's the probability of picking a blue marble? Can it be 1/6 also? Well, it could be if we put back the first marble we picked. But if we don't put back the first marble, our sample space will have changed. We started with six marbles, picked one, and now we only have five marbles in the sample space, so the probability of picking a blue marble is now 1/5. And in such a case we have dependent events, because something about the first one changed the second one.

In this case, the second event is not independent of the first, because the first changed the sample space for the second. Now, that doesn’t always have to happen; it’s possible that the *sample space* can be changed in a way that doesn’t change the *probability*. But typically, “**selection with replacement**” leads to **independent** events, while “**selection without replacement**” leads to **dependent** events.

The probability of two dependent events occurring, one right after the other, is still found by using the same formula:

P(A,B) = P(A) * P(B)

The big difference is that the individual probabilities won't have the same sample spaces. So from our example, what is the probability of picking a yellow marble and then a blue marble, without putting the first marble back?

P(Yellow) = 1/6
P(Blue) = 1/5
P(Y,B) = 1/6 * 1/5 = 1/30

This time, order matters a lot; the fact that *Y* is a result of an action that is done first (and affects the probability for *B*) means that it was necessary to use the notation \(P(Y, B)\), stating the order. We could also have defined *Y* and *B* specifically as *Y* = “yellow marble on first pick” and *B* = “blue marble on second pick”. Then we can call this compound event “*Y* and *B*“. But then we should properly state the formula as \(P(A \text{ and } B) = P(A) \cdot P(B | A)\), explicitly indicating that the second factor is not the probability of *B* in general, but of *B* occurring, **given that** *A* occurred.

This is a very different number from what we would get if the events were independent, that is if the sample space remained the same because we put the first marble we picked back into the box. Then:

P(Yellow) = 1/6
P(Blue) = 1/6
P(Y,B) = 1/6 * 1/6 = 1/36

So the trick is to figure out ahead of time if the events are independent or dependent, and then use the formula:

P(A,B) = P(A) * P(B)

This is selection *with* replacement; here, the events are independent, and *P*(*B*) was in fact the probability of *B* regardless of whether *A* had occurred.
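The contrast between the two selection schemes can be checked by brute force. This Python sketch (my illustration, with the three red marbles labeled so that every outcome is equally likely) enumerates ordered picks without and with replacement:

```python
from fractions import Fraction
from itertools import permutations, product

marbles = ["R1", "R2", "R3", "B", "G", "Y"]  # 3 red, 1 blue, 1 green, 1 yellow

def p_yellow_then_blue(draws):
    """Probability of yellow first, then blue, over a list of ordered pairs."""
    hits = sum(1 for first, second in draws if first == "Y" and second == "B")
    return Fraction(hits, len(draws))

without_repl = list(permutations(marbles, 2))  # 6 * 5 = 30 ordered pairs
with_repl = list(product(marbles, repeat=2))   # 6 * 6 = 36 ordered pairs

print(p_yellow_then_blue(without_repl))  # 1/30
print(p_yellow_then_blue(with_repl))     # 1/36
```

The shrinking sample space for the second pick is exactly what makes the first result 1/30 rather than 1/36.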

In this entire discussion, we are thinking of independence as something we can determine ahead of time, from our knowledge of what is being done (the “experiments”): tossing two unrelated things vs. selecting without replacement. We’ll see examples later where we determine independence in a different way, based on numbers only.

## Dice and cards

How about some more examples? This is from 2003:

Independent vs. Dependent Events What is the difference between "independent" and "dependent" events? This is from the probability of a compound event. Could you give me an example?

Doctor Ian answered concisely:

Hi Sonny, Two events are independent if the outcome of one has no effect on the outcome of the other. The classic example would be rolling a pair of dice. What happens with one die has no effect on what happens with the other die. Two events are dependent if the outcome of one has an effect on the outcome of the other. The classic example would be drawing cards from a deck without replacement. The probability of drawing an ace changes depending on what other cards have already been drawn. Probabilities for independent events often involve exponents, while probabilities for dependent events often involve factorials. How many ways are there to roll three dice? There are 6 ways to roll the first, 6 ways to roll the second, and 6 ways to roll the third, so the number of possible outcomes is

6 * 6 * 6 = 6^3

How many ways are there to draw three cards from a deck without replacement? There are 52 ways to draw the first one; but now there are only 51 ways to draw the second (because one card has been removed); and only 50 ways to draw the third. So the number of possible outcomes is

52 * 51 * 50 = 52! / (52 - 3)!

Here, nothing has been said about probability, only about outcomes (the sample space). The exponent is a result of multiplying repeatedly by the same thing (because the probability remained the same on each repetition), while the factorial is a result of multiplying by a number that repeatedly decreases, as the sample space is reduced. This only applies when we are repeating the same event, rather than doing two different things.
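The two counts are easy to verify in Python (a quick sketch of my own, not part of the answer):

```python
from math import factorial

dice_outcomes = 6 ** 3        # repeated event: the same factor each time
card_outcomes = 52 * 51 * 50  # without replacement: a shrinking factor

# The falling product is a ratio of factorials, as in the answer.
assert card_outcomes == factorial(52) // factorial(52 - 3)

print(dice_outcomes, card_outcomes)  # 216 132600
```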

## Independent vs. mutually exclusive

Many students confuse independence with mutual exclusivity, because they sound superficially similar. Consider this question from 2007:

Independent and Mutually Exclusive Events I am confused about independent and mutually exclusive events. Can you please provide a practical example of each? If A and B are independent then it means that the occurrence of one event has no effect on the occurrence of the other event. I think mutually exclusive also means the same. Is that right?

This time, Doctor Pete answered:

Hi TR, If two events A and B are independent, then Pr[A and B] = Pr[A]Pr[B]; that is, the probability that both A and B occur is equal to the probability that A occurs times the probability that B occurs.

He is using a slightly different notation, with Pr in place of P, both meaning probability. The fact stated here is one that we saw above; but here, no explanation has been given. From this perspective, we can take the formula as a **definition** of independent events. (More on that next time.)

If A and B are mutually exclusive, then Pr[A and B] = 0; that is, the probability that both A and B occur is zero. Clearly, if A and B are nontrivial events (Pr[A] and Pr[B] are nonzero), then they cannot be both independent and mutually exclusive.

The actual definition of “mutually exclusive” is in the words: each event **excludes** the other. (Mutual means “each other”.) That is, if one happens, the other can’t; they can’t both happen. Therefore, the probability of both happening is zero.

A real-life example is the following. Consider a fair coin and a fair six-sided die. Let event A be obtaining heads, and event B be rolling a 6. Then we can reasonably assume that events A and B are independent, because the outcome of one does not affect the outcome of the other. The probability that both A and B occur is Pr[A and B] = Pr[A]Pr[B] = (1/2)(1/6) = 1/12. Since this value is not zero, then events A and B cannot be mutually exclusive.

This example illustrates the fact previously mentioned, that “nontrivial events” that are independent **can’t** be mutually exclusive: if \(P(A)\cdot P(B) = 0\), then at least one of the probabilities would have to be zero (a “trivial” event). So far from being the same, “independent” and “mutually exclusive” are, themselves, (almost) “mutually exclusive”.

An example of mutually exclusive events is the following: Consider a fair six-sided die as before, only in addition to the numbers 1 through 6 on each face, we have the property that the even-numbered faces are colored red, and the odd-numbered faces are colored green. Let event A be rolling a green face, and event B be rolling a 6. Then

Pr[A] = 1/2
Pr[B] = 1/6

as in our previous example. But it is obvious that events A and B cannot simultaneously occur, since rolling a 6 means the face is red, and rolling a green face means the number showing is odd. Therefore Pr[A and B] = 0.

This is an example with two events defined by the same action (rolling the one die). A simpler (but more obvious, and therefore less instructive) compound event would be “roll one number that is both a 6 and a 5”.

Could we have two such events that are in fact independent? Yes! Define event *A* as rolling an even number (a red face in the example above), and *B* as rolling a number greater than 4. Then

\(P(A) = 3/6 = 1/2\)

\(P(B) = 2/6 = 1/3\)

\(P(A \text{ and } B) = P(\text{even and greater than 4}) = P(6) = 1/6\)

This shows that the number being even doesn’t affect the probability of being greater than 4, because just as half of the numbers 1, 2, 3, 4, 5, 6 are even, so are half of the numbers 5, 6. The first event does affect the possible *outcomes* of the second event, but not the *probability*. So these events are independent, though it was not initially obvious as in the case of two dice.
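We can verify this independence numerically. In this Python sketch (my own, using the product-rule definition), the identity \(P(A \text{ and } B) = P(A) \cdot P(B)\) is checked directly on the six die faces:

```python
from fractions import Fraction

die = range(1, 7)
A = {n for n in die if n % 2 == 0}  # even: {2, 4, 6}
B = {n for n in die if n > 4}       # greater than 4: {5, 6}

def p(event):
    return Fraction(len(event), 6)

# 1/6 == 1/2 * 1/3, so A and B are independent.
print(p(A & B) == p(A) * p(B))  # True
```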

Therefore, we see that a mutually exclusive pair of nontrivial events are also necessarily dependent events. This makes sense because if A and B are mutually exclusive, then if A occurs, then B cannot also occur; and vice versa. This stands in contrast to saying the outcome of A does not affect the outcome of B, which is independence of events.

We could say that mutually exclusive events are *as far from independent as possible*; the fact that event *A* occurs has a *huge* effect on the probability that *B* occurs, forcing it to be zero!

## Confusion over definitions

So why does it sound to many students as if the two terms meant the same thing? Possibly they connect both terms with something like “separate” or “non-interacting”, which they are, but in *very* different ways! Independent events don’t interact in the sense of affecting one another’s probability; mutually exclusive events are “non-intersecting” (disjoint), unable to happen together.

Here is an (unarchived) question from 2008, to illustrate this thinking:

Let A be the event that a student is enrolled in an accounting course, and let B be the event that a student is enrolled in a business statistics course. It is known that 30% of all students are enrolled in an accounting course and 40% of all students are enrolled in business statistics. Included in these numbers are 15% who are enrolled in both business statistics and accounting. From this information, it can be concluded that... A & B are mutually exclusive, A & B are not independent, or A & B are complements. I'm just not sure how to go about the problem, I think they are mutually exclusive, but I'm not sure, because it seems as if they have nothing to do with each other, but at the same time I think they might also be dependent, because if you take bus. stats you might also take accounting, so I don't know, please help and explain.

This gives a strong hint to the wrong thinking. I replied, in part:

It looks like you are thinking in terms of informal descriptions of the terms "mutually exclusive", "independent", and "complement", rather than their definitions. In particular, "mutually exclusive" doesn't mean "they have nothing to do with each other", but that they can't both be true at the same time; and "dependent" doesn't mean "you might take both". Consider the three definitions, which can be briefly stated this way:

Events A and B are mutually exclusive when "A and B" is empty -- that is, they are never both true.

Events A and B are independent when P(A and B) = P(A) * P(B). If this is not true, they are not independent.

Events A and B are complements when A = not B.

Later the same year, we got this similar question from an adult:

I am confused about whether two events can be both independent and mutually exclusive. I think it is yes. Mutually exclusive will be zero and if the events are independent then one event has nothing to do with the other outcome.

Again, there seems to be an overly informal definition in view. I replied:

Events A and B are mutually exclusive when P(A and B) = 0. That is, they can't both happen. Note: whether A happens has a VERY strong effect on whether B happens -- it makes it impossible! Events A and B are independent when P(A and B) = P(A) * P(B). One implication of this is that one event does not affect the PROBABILITY of the other event -- it is a common misconception to express this as if "one event has nothing to do with the other", which sounds more similar to mutual exclusion than it really is. Put the two facts about P(A and B) together to see what has to be true in order for two events to be BOTH mutually exclusive AND independent. You'll see that it's a pretty special case.

The answer, as we’ve seen, is that one event must be trivial — that is, it never happens. We never see this case in textbook examples!
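As a check on that special case, here is a small Python sketch (my own illustration): the impossible event “roll a 7” on a six-sided die is both mutually exclusive with “roll a 6” and, trivially, independent of it:

```python
from fractions import Fraction

die = set(range(1, 7))
A = {n for n in die if n == 7}  # trivial event: empty, so P(A) = 0
B = {n for n in die if n == 6}

def p(event):
    return Fraction(len(event), 6)

# P(A and B) = 0, and it also equals P(A) * P(B) = 0 * 1/6.
print(p(A & B) == 0 == p(A) * p(B))  # True
```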
