Causal Arguments and Causal Fallacies

We have learned that any statement of the form "X causes Y" can be represented as an explanation in which X is the reason and Y is the conclusion.  So, for example, if someone were to say that chocolate causes baldness in women, we would immediately reconstruct this as an explanation.  But we would also find ourselves asking whether this statement is actually true.  In other words, we would ask "How do you know that chocolate causes baldness in women?"  If the speaker were to try to answer this question, he would no longer be giving us an explanation, but rather an argument in support of this explanation.  We call these causal arguments.

Good causal arguments rest on the application of two important principles: the Principle of Agreement and the Principle of Difference.

The Principle of Agreement:  If X is a common factor in multiple occurrences of Y, then X is a cause of Y.

The Principle of Difference:  If X is a difference between situations where Y occurs and situations where Y does not occur, then X is a cause of Y.
 

Both principles have intuitive applications. To see this, suppose 10 of us eat lunch together and a few hours later 3 of us become violently ill.  Of course, we immediately suspect the lunch and we immediately start trying to figure out what the 3 sick people ate that the 7 healthy people didn't.  Why do we suspect the lunch?  Simple: Because the lunch is a common factor in multiple occurrences of the illness.  And why do we look for something that the 3 sick people ate and the 7 healthy people didn't?  Simple again:  Because this would be a difference between the people who got sick and the people who didn't.

Although these two principles properly inform our causal reasoning, they are really very weak and unreliable if used independently. This is because there are many common factors and many differences that are still causally irrelevant.  One common factor may be that all three of the sick people wear glasses.  A difference may be that one of the sick people is an Elvis impersonator whereas none of the seven healthy people are.  But the principles gain considerable strength if used together, and in succession.  That is, first we apply the principle of agreement to isolate a common factor, then we apply the principle of difference to determine whether the common factor is also a difference between the two groups. (In Statistical Arguments for Causal Claims we will see how the principle of difference and the principle of agreement inform basic experimental design.)
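
To make the two-step procedure concrete, here is a minimal sketch in Python, using entirely made-up menu data for the lunch example.  The Principle of Agreement narrows the field to factors shared by everyone who got sick; the Principle of Difference then discards any of those factors that the healthy diners also share.

    # A minimal sketch, assuming made-up menu data: who ate what at the lunch.
    sick = {"Ana": {"fruit salad", "soup", "bread"},
            "Ben": {"fruit salad", "bread"},
            "Cal": {"fruit salad", "soup"}}
    healthy = {"Dee": {"soup", "bread"},
               "Eli": {"bread"},
               "Fay": {"soup"},
               "Gus": {"bread", "soup"},
               "Hal": {"bread"},
               "Ida": {"soup"},
               "Jo": {"bread", "soup"}}

    # Principle of Agreement: find the foods common to everyone who got sick.
    common_to_sick = set.intersection(*sick.values())

    # Principle of Difference: keep only those common factors that none of the
    # healthy diners share, i.e. factors that also mark a difference between groups.
    eaten_by_anyone_healthy = set.union(*healthy.values())
    suspects = common_to_sick - eaten_by_anyone_healthy

    print(common_to_sick)   # {'fruit salad'}
    print(suspects)         # {'fruit salad'}: it survives both principles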

When we use these principles carefully our explanations are more likely to be correct, but we can't guarantee that an explanation is correct no matter how careful we are.  To do that we would have to rule out all other possible causes, which is simply impossible.  For example, suppose we successfully determine that the three sick people ate fruit tainted with botulism and that the seven healthy people did not.  We would feel sure that botulism was the cause of their illness.  We would feel even surer if their symptoms corresponded to those of botulism, and we would feel virtually certain if an expert in botulism confirmed our diagnosis on the basis of a laboratory test.  But even with all this confirmation it is still possible that the three people who were exposed to botulism miraculously escaped infection, and were then subsequently exposed to something else that produced exactly the same symptoms.  It is even possible that these similar symptoms were caused by entirely different things.  This, of course, is incredibly unlikely, but unlikely events do occur. [I once read about three people from my town who drove off a steep embankment in their pickup and were killed.  Of course, I naturally assumed that they were killed when they drove off the embankment.  But it turned out that, after miraculously surviving the initial crash, they had all climbed back up the embankment only to be hit by an oncoming truck.  The movie Magnolia provides another example of a (purportedly true) event in which a boy is driven to commit suicide by his parents' dysfunctional marriage.  Before doing so he loaded an empty shotgun that his mother was always brandishing at his father during their arguments.  The boy jumped off the top of the building during one of his parents' arguments, and would have been killed by the fall except that by sheer coincidence the fire department was on the ground that day practicing with their new net. The boy landed in the net, DOA.  Do you know why?] So, while no causal principle is foolproof, some are much less foolproof than others.  The worst ones are implicated in the fallacies discussed below.

Post Hoc
    Def.: Asserting that A is a cause of B just because B occurs after A.

"Post hoc" is Latin, and short for "post hoc ergo propter hoc"  which means "after this, therefore because of this".  Put in the form of a principle:

If Y comes after X, then X is a cause of Y.

This is obviously a terrible principle, but something like it is implied in primitive causal reasoning.  Suppose you are furious with your mother and you walk home purposely stepping on cracks in the sidewalk.  Later that night she falls off a ladder and breaks her back.  You will feel responsible, even if you don't subscribe to the superstition, simply because of the temporal relationship.  Of course, the post hoc principle is not completely irrelevant to causal reasoning. (In other words, it is not a Red Herring to cite temporal succession as part of an argument for a causal conclusion.)  We generally understand it as a necessary condition of A causing B that B occur after A. (What would it even mean in the above example to say that your mother falling off the ladder caused you to step on the cracks?) However, B coming after A is very far from a sufficient condition for causation.  In other words, the simple fact that one thing came before another is extremely weak evidence for thinking the first is a cause of the second.

A slightly more sophisticated form of the Post Hoc fallacy occurs when we observe that B-type events often (or always) follow A-type events.  For example, if your sister always poops in her diaper right after she throws a tantrum, you might think that her tantrum causes her to poop.  This reasoning at least has the virtue of employing the principle of agreement:  Tantrumming has here been identified as a common factor in multiple occurrences of pooping.  This is stronger evidence of some kind of causal connection than if you had simply observed the phenomenon once.  However, it is still very weak evidence as it stands, since babies are pooping all the time and the principle of difference has not been employed to determine whether tantrumming is actually a difference between situations where pooping occurs and situations where it doesn't. Put differently, we don't know whether the baby poops just as often when she doesn't throw a tantrum.
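
Put as simple arithmetic, the missing step is a comparison of rates.  The sketch below uses invented counts to show why noticing only the tantrum-then-poop sequence can mislead: if the baby poops just as often without a tantrum, the common factor identified by the principle of agreement turns out not to be a difference at all.

    # Invented counts of what was observed over several weeks.
    tantrum_then_poop    = 12
    tantrum_no_poop      = 3
    no_tantrum_then_poop = 40
    no_tantrum_no_poop   = 10

    rate_after_tantrum = tantrum_then_poop / (tantrum_then_poop + tantrum_no_poop)
    rate_without_tantrum = no_tantrum_then_poop / (no_tantrum_then_poop + no_tantrum_no_poop)

    print(round(rate_after_tantrum, 2))    # 0.8
    print(round(rate_without_tantrum, 2))  # 0.8, the same rate, so the tantrum
                                           # marks no difference in pooping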

Reversing (or Confusing) Cause and Effect
   Def.: Claiming that A is a cause of B when the evidence suggests or is compatible with B being a cause of A.

This fallacy consists in inferring from the fact that A and B are positively correlated (i.e., that they tend to occur together) that one must be a cause of the other.  Put in the form of a principle:

If X and Y are positively correlated, then X is a cause of Y.

Repeatable positive correlations are very often evidence of some kind of causal relationship, but a simple correlation doesn't tell us anything about the nature of that relationship. So, for example, if we notice that poor job performance is positively correlated with drug abuse, we don't know from that alone whether drug abuse causes the poor job performance or poor job performance causes the drug abuse.  If we notice that married couples who spend more than an hour a day in conversation tend to be happier in their marriage, we don't know whether conversation causes happy marriage or happy marriage causes conversation.  Of course, these are not the only two alternatives. (Recall the Fallacy of False Alternatives.)  It is possible that A causes B and B causes A. Pleasant conversation may cause happiness, which may cause more conversation, which may cause more happiness. (This is an example of what scientists and engineers call positive feedback.)  There may also be some completely different cause C giving rise to both A and B.  This possibility leads us to our next fallacy.
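
One way to see why a correlation is silent about direction is to notice that the usual measure of correlation is symmetric: it gives exactly the same number whichever variable we treat as the cause.  The sketch below uses invented conversation and happiness figures (and requires Python 3.10 or later for statistics.correlation).

    import statistics

    # Invented figures: hours of conversation per day and a made-up happiness score.
    conversation = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
    happiness = [4, 5, 6, 6, 8, 9]

    r_xy = statistics.correlation(conversation, happiness)
    r_yx = statistics.correlation(happiness, conversation)

    # The two numbers are identical; the statistic is blind to direction,
    # to feedback loops, and to common causes alike.
    print(round(r_xy, 3), round(r_yx, 3))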

Neglecting a Common Cause
    Def.: Claiming that multiple events have distinct causes when the evidence suggests or is compatible with all the events having the same cause.

This fallacy is usually committed when, noticing a reliable relationship (either temporal or purely correlational) between A and B, we assume that either A is a cause of B or B is a cause of A, neglecting the third possibility that A and B are both caused by something else entirely.  For example, if we notice that alcoholism is positively correlated with divorce, we might assume that alcoholism causes divorce or that divorce causes alcoholism.  But, in the absence of evidence to the contrary, it is at least as plausible to suggest that something else, viz., an unhappy marriage, could give rise to both alcoholism and divorce.  This fallacy also fits the example of your sister's poopy diaper:  it is quite plausible to suggest that something (e.g., constipation or indigestion) is causing both the tantrumming and the pooping.  This fallacy is also committed when we mistakenly offer independent explanations of multiple events.  For example, if you notice that your car won't start and your headlights won't go on, you might think that your car has two distinct problems when it only has one, viz., a dead battery.
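
A small simulation makes the point vividly.  In the sketch below (all of the probabilities are assumptions chosen purely for illustration), a hidden factor raises the chance of both alcoholism and divorce; the two end up strongly associated even though neither has any direct effect on the other.

    import random

    # All probabilities below are assumptions chosen purely for illustration.
    random.seed(0)
    records = []
    for _ in range(10_000):
        unhappy = random.random() < 0.3                        # hidden common cause
        alcoholism = random.random() < (0.4 if unhappy else 0.05)
        divorce = random.random() < (0.5 if unhappy else 0.05)
        records.append((alcoholism, divorce))

    p_divorce_given_alcoholism = (sum(1 for a, d in records if a and d)
                                  / sum(1 for a, _ in records if a))
    p_divorce_overall = sum(1 for _, d in records if d) / len(records)

    # Divorce is markedly more common among the simulated alcoholics than in the
    # population at large, even though nothing in the model links the two directly.
    print(round(p_divorce_given_alcoholism, 2))
    print(round(p_divorce_overall, 2))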

We should note here that common cause reasoning is extraordinarily important in scientific inquiry.  Most great leaps in scientific understanding (e.g., Newton's law of universal gravitation, Darwin's theory of evolution, the geological theory of plate tectonics) depended on some pretty brilliant person suggesting a common cause for things that were previously believed to be unrelated (or differently related).  For example, the theory of plate tectonics explained volcanoes, earthquakes, fossil distributions, differential ages of mountain ranges, and the "jigsaw puzzle appearance" of the continents. Of course, it's a bit of a stretch to say that this discovery just amounted to pointing out a fallacy.  What geologists actually did was to develop a theory for which there was very little evidence at first.  They then went about the laborious, and utterly uncertain, task of gathering corroborating evidence for the theory and submitting it to the systematic attempts of other scientists to refute it.

Causal Determinism
    Def.: Asserting or denying a causal relationship based on the fact that the proposed cause does not immediately, absolutely, or uniquely determine the effect.

Causation is often conceived as a necessary, or deterministic, relation.  The idea is that whenever we say that A is a cause of B, we mean that if A occurs, B absolutely must occur.  (You will recall an analogous relationship in our definition of deductive implication:  if the premises are true, the conclusion must be true.)  An even stronger form of causal determinism assumes that if A occurs, B must occur immediately, and that B never occurs in the absence of A. So the relevant principles here are:

If X causes Y and X occurs, then Y must occur.
If X causes Y and X occurs, then Y occurs immediately after X.
If X causes Y, then Y never occurs in the absence of X.

In an effort to describe this fallacy, it is sometimes claimed that causation simply is not a deterministic relation. The evidence for this comes from science. Although scientists have employed deterministic models of causation in the past, they now use a purely probabilistic model of causation that allows for something being a cause even when it does not determine its effect.  The reason for this is that they have gradually come to believe that randomness is just part of nature.  We'll never get particularly good at long-range weather forecasting, for example, because what happens in weather systems is partly random.  It isn't purely random, of course.  So, with enough information we can predict reasonably well what will happen in a short period of time, but the multiplication of the random effects over time permanently prevents us from making accurate and informative long-range weather predictions.  There is a problem with thinking of the fallacy in this way, however, and that is that scientists really aren't agreed on whether the randomness we observe is real, or whether it just reflects the limits of our understanding.  (Perhaps you have heard of Einstein's famous remark that "God does not shoot dice with the universe."  He didn't believe in randomness.)  It is possible that there is nothing random about the weather, and that appearances to the contrary are due to the inherent difficulty of measuring so many complicated causal interactions with any kind of precision.

A much better way to understand the fallacy of Causal Determinism is to realize that there are two different ways of using the concept of causation.  One way is deterministic; the other is not.  In the latter case, when we say that A is a cause of B, we do not mean that A absolutely determines B, but (roughly) that, all other things being equal, B is more probable when A occurs than when A does not occur.  When we are using this nondeterministic interpretation of causation, we usually speak of A, not as the cause of B, but rather as a causal factor in the occurrence of B.  We can also quantify the strength or effectiveness of the causal factor.  A purely deterministic cause will have an effectiveness of 100% or 1.  A cause that produces its effect 50% of the time will have an effectiveness of .5, etc.  So the fallacy of Causal Determinism can actually be understood as a specific form of the fallacy of Equivocation.  Recall that this fallacy consists in assuming that two different uses of a word or expression mean the same thing when they don't.  In the fallacy of Causal Determinism, one rejects the claim that A is a causal factor for B on the basis of the fact that A does not causally determine B.
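
Here is a rough sketch of how that quantification might go, using invented counts.  The "effectiveness" is read as the fraction of cases in which the causal factor is followed by its effect, and the comparison with the no-factor group is what entitles us to call it a causal factor at all.

    # Invented counts: how often the effect follows the proposed causal factor.
    with_factor_and_effect = 50
    with_factor_no_effect = 50
    without_factor_and_effect = 5
    without_factor_no_effect = 95

    effectiveness = with_factor_and_effect / (with_factor_and_effect + with_factor_no_effect)
    baseline = without_factor_and_effect / (without_factor_and_effect + without_factor_no_effect)

    print(effectiveness)  # 0.5: a causal factor that produces its effect half the time
    print(baseline)       # 0.05: the effect is ten times more probable when the factor is present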

Here is a simple example of the fallacy of Causal Determinism in which all three principles above are implicated:  "I don't believe smoking causes cancer because (a) lots of people who smoke don't get lung cancer, (b) some people get lung cancer long after they have quit smoking, and (c) some people who have never smoked at all get lung cancer."  All of these statements are true, and they do show that smoking is not a deterministic cause.  But they do not in any way bear on the claim that smoking is a causal factor for lung cancer.