Evaluating Rationales II


Objective and Subjective Evaluation of Rationales

The basic concepts we use to describe the structure of rationales (deductive validity, truth, inductive strength and reliability) are essential to providing an objective basis for evaluating rationales as well.  In fact, what we mean when we say that we aim at an objective evaluation of rationales is that we aim at assessing their validity, truth, inductive strength and reliability.  Rationales can be evaluated with other aims in mind. For example, we might evaluate a rationale with respect to how it makes people feel. But the fact that an argument for a particular conclusion might be offensive to someone, or give someone else the warm fuzzies, is of no particular interest from a logical point of view.  Even questions about how effective an argument is at convincing people of its conclusion, or how effective an explanation is at making people feel like they understand the conclusion, aren't what logicians are after.  (These are fascinating psychological questions, however.  Cognitive and evolutionary psychologists study them intensely.)

The sorts of considerations that don't interest us from a logical point of view are what some would call subjective considerations. Although this is a perfectly legitimate way to use the term, it conveys the impression that there are no subjective aspects to logical evaluation, and this isn't so.  In fact, an important part of logical evaluation is, in a certain technical and perfectly objective sense, subjective.  It's important to understand why this is so.

In the previous section we discussed the concept of probability a little bit, and now we're going to extend that discussion a little further. We have already noted that our cognitive attitude toward particular statements is not typically one of complete acceptance or complete rejection; rather, we believe statements to some degree.  This is one of the basic ideas behind what is known as Bayesian analysis. The degree to which you believe something is a perfectly objective fact about you, but it is subjective in the sense that it is a fact about your mind, not an indication of whether this belief is actually true or false. The degree to which you believe a statement is what Bayesian probability theorists call your subjective probability.

The basic idea behind Bayesian analysis is that different people can and often do attach different subjective probabilities to the same statement.  So, for example, if Vicky and Emrys are hiking in the woods, a disagreement between them as to whether they are lost is not just a matter of Vicky being completely sure they are lost and Emrys being completely sure they are not.  Rather, it is a matter of, say, Vicky being 85% sure they are lost and Emrys being only 25% sure that they are lost, or in other words 75% sure that they are not. This means that when Vicky and Emrys encounter some evidence that they are not lost after all, say, a familiar-looking tree, it will not have the effect of putting them both on the same page with respect to their lostness.  They will both count the familiar-looking tree as evidence that they are not lost, but because they began with different subjective probabilities regarding their lostness, Emrys will end up more convinced of this than Vicky.

The moral of this story is that while we might all bring different initial subjective probabilities to the table, the fact that we all buy into the same basic set of principles for revising our subjective probabilities in accord with new information means that these subjective probabilities will eventually converge to something very close to complete agreement. (This can all be demonstrated nicely in the probability calculus, but we are going to forego that demonstration here.)  In the above example, if Vicky and Emrys start getting more and more evidence that they are not lost (i.e., more and more familiar sights, culminating in the sight of home itself) then they will both keep revising their confidence levels until they are basically in agreement.
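We'll keep that promise and skip the formal demonstration, but a short computational sketch can give the flavor of it. The sketch below is not part of the original example: the starting probabilities (85% and 25%) come from Vicky and Emrys, while the likelihoods attached to each "familiar sight" are assumptions chosen purely for illustration.

```python
# Sketch of Bayesian revision for the Vicky/Emrys example.  The priors (0.85
# and 0.25) are from the example; the likelihoods below are illustrative
# assumptions: a familiar sight is more likely if they are NOT lost.
def update(prior_lost, p_sight_if_lost=0.2, p_sight_if_not_lost=0.8):
    """Revise P(lost) after one 'familiar sight', using Bayes' rule."""
    numerator = p_sight_if_lost * prior_lost
    denominator = numerator + p_sight_if_not_lost * (1 - prior_lost)
    return numerator / denominator

vicky, emrys = 0.85, 0.25              # initial subjective probabilities of being lost
for sighting in range(1, 6):           # five familiar sights in a row
    vicky, emrys = update(vicky), update(emrys)
    print(f"After sighting {sighting}: Vicky {vicky:.2f}, Emrys {emrys:.2f}")
# Both values fall toward 0 and toward each other as the evidence piles up.
```

However the likelihoods are chosen, as long as a familiar sight is more probable when they are not lost than when they are, repeated sightings push both hikers' subjective probabilities toward zero, and toward each other.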

So that is what we mean when we say that there is a subjective element to evaluating rationales. It is important to see that it doesn't imply any kind of radical relativism or logical anarchy when it comes to the evaluation of rationales. We have different subjective starting places, but if we accept the same principles for revising our beliefs and we are exposed to the same basic information, then we will eventually come to rough agreement about the objective state of the world.  Cool, huh?

The Objective Evaluation of Rationales

In the previous section we learned one way not to evaluate rationales, i.e., we cannot refute a rationale by the simple expedient of showing that it relies on a principle that has some exceptions.  We're now going to use the basic insights of Bayesian analysis (though not Bayesian analysis itself) to articulate an approach to evaluating rationales. As indicated above, the evaluation of rationales has an objective and a subjective component.  In this section we will deal with the objective component.

The objective evaluation of a rationale is a matter of determining the logical relation between the premises and the conclusion.  The deductive validity of a rationale and the inductive strength of a rationale are both traditional objective measures of logical value.  Beyond validity and strength, the value of a rationale depends on the reliability of the principle and our degree of confidence in the reason given.  In the ideal case the principle will be perfectly reliable and we will be perfectly confident in the reason given.  The closest we can get to something like that is a rationale about mathematical or logical relationships.

Example 1

This is the kind of argument that logicians and mathematicians call a proof.  The reason they call it a proof is that it is based on one of the most basic principles of logic and mathematics.  P1 and P2 are both versions of what's known as the transitivity of identity.  The reasons, too, seem to be beyond any actual doubt because they don't really depend on anything but the meanings of the terms "odd" and "even".  So unless we are all somehow confused about what these terms mean, which is hard to imagine, the argument seems to be beyond any possible doubt.  This is one of those situations where we can say with 100% confidence that the reasons are true and the principles are perfectly reliable, and hence the conclusion is guaranteed. (Do you agree?  Would you bet your life to get one dollar, right now, that this conclusion is correct?)

But once we stray just a little bit into the real world, everything changes.

Example 2

On the surface, it would seem that the principle involved in this explanation is just as reliable as the principles used in Example 1, but that's not quite right.  This is a very, very reliable principle, and it rests on basic principles of arithmetic, just like Example 1.  But it is not just a principle of arithmetic.  It is a principle concerning what happens when you do something in the physical world, namely, make a withdrawal from an account.  An account is not a number.  And a withdrawal is not just an arithmetical operation. These are physical processes, i.e., they involve the actions of computers and people.  Because these activities will occasionally be performed or recorded erroneously, the principle does not have the absolute reliability of a mathematical principle, and the reason does not have the absolute certainty of a mathematical definition. Occasionally, though very rarely, a mistake will be made in the processing of a withdrawal. Hence, while Lois' explanation is as reliable a real world explanation as we are ever going to find, its conclusion is not guaranteed.

Straying still a little further into reality we may arrive once again at our cauldron of jelly beans.  Recall that it contains millions of jelly beans that we have experimentally determined to be 90% cinnamon and 10% French vanilla, though we've never actually looked inside. Suppose we complicate our jelly bean selection procedure as follows.

Let's say that if you roll a six, then you don't get to pick a bean; otherwise you do.  So, how confident should you be in the following statement?

The answer depends on two things:

Here is one way to represent the argument that you will eat a cinnamon jelly bean.

The reliability of P1 is .90 because the chance that the jelly bean will be cinnamon, if you pick one, is .9.  The probability that you will pick a bean at all is the probability that you will roll something other than a six.  Since there are five other possibilities, and all the sides of the die are equally likely, the probability that you will get to choose a bean is 5/6 ≈ .83.  Now, since the roll of the die and the flavor of the bean you select are independent, the likelihood that you will both pick a bean and get a cinnamon one is, according to probability theory, .83 × .90 ≈ .75.  So, the answer is that you should be 75% confident that you will eat a cinnamon jelly bean.  That's still a pretty good chance, but perhaps a little less than you might have guessed.
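If you want to double-check that arithmetic, here is a minimal sketch in Python. Nothing in it goes beyond the example itself: a fair six-sided die and a cauldron that is 90% cinnamon.

```python
# A quick check of the jelly bean arithmetic above, using only the numbers
# given in the example: a fair six-sided die and a 90% cinnamon cauldron.
p_pick = 5 / 6                    # you pick a bean unless you roll a six
p_cinnamon_given_pick = 0.90      # experimentally determined share of cinnamon beans
p_eat_cinnamon = p_pick * p_cinnamon_given_pick
print(round(p_pick, 2), round(p_eat_cinnamon, 2))   # 0.83 0.75
```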

Most of the things we reason about are more complicated and far less well understood than a thoroughly sampled cauldron of jelly beans and the roll of a die.  But this example still illustrates something important about real life reasoning, and that is that even when we are pretty confident in our reason and have the use of a pretty strong principle, the conclusion will turn out to be quite a bit less certain than either.  For example, if a rationale employs a principle that is 95% reliable, and a reason we are 95% confident is true, then the degree of confidence we should have in the corresponding conclusion is .95 × .95 ≈ .90.
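That product rule is simple enough to state as a one-line function. The sketch below just restates the rule of thumb used in these examples, on the assumption (made throughout this section) that the reliability of the principle and our confidence in the reason can simply be multiplied.

```python
# A minimal sketch of the rule used in the text: confidence in the conclusion
# is the reliability of the principle times our confidence in the reason.
def conclusion_confidence(principle_reliability, reason_confidence):
    return principle_reliability * reason_confidence

print(round(conclusion_confidence(0.95, 0.95), 4))  # 0.9025, i.e. roughly 90%
print(round(conclusion_confidence(0.80, 0.80), 4))  # 0.64, a case we will meet again below
```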

Many people would say that a 90% confidence level is simply not adequate for belief.  If someone who lies about 10% of the time tells you something, you probably aren't going to automatically believe her.  In fact, you would probably regard someone who lies that often as a habitual liar, and rarely believe her at all. But in the end, it's important to see that a huge amount of our everyday reasoning fails to give us anything like 90% confidence in the conclusion, and we're not in a position to reject such reasoning as if it were simply a pack of lies.  So, how should we proceed when confronted with something like the following?

Example 3

This is the sort of reasoning we have to evaluate on an everyday basis.  You can easily imagine the ensuing conversation.

Notice that, in very short order, Jess has exposed both that she does not have high confidence in the reason and that the principle is far from reliable.  Even if she attaches, say, an 80% confidence level to the reason and 80% reliability to the principle, she would have to be only 64% confident in the conclusion. Most logicians would tell you that makes it a pretty crappy argument.  But should Jess simply reject the conclusion outright?  That would be stupid.  Think about this for a second.  Five minutes ago, if someone had asked Jess whether some randomly selected guy on campus liked her, she would have said no with enormous confidence, since it is highly unlikely that such a person would know her at all. So, basically, her confidence in the proposition "That French guy likes me" has gone from almost 0% to 64% in the blink of an eye.  A 64% confidence level is not enough for outright belief, but the argument should certainly be permitted to have some effect on how Jess sees the world.  For example, it should not totally surprise her now if the French guy starts displaying overt interest in her.

Still, Jess did point out something important, and that is that Sasha seems to be too confident in her conclusion given the reason and the corresponding principle.  This is a legitimate criticism, and we will formalize these points as follows.

Weak Principle

The most important thing to understand about this kind of criticism is that it does not require a principle to be highly reliable in order to be logically useful.  We only require that our confidence in the conclusion be proportional to the reliability of a principle.

Weak Reason

Here, too, the most important thing to understand is that we do not require any particular degree of confidence in the conclusion.  We require only that (a) the expressed degree of confidence in the reason be appropriate and (b) the expressed degree of confidence in the conclusion be proportional to the appropriate degree of confidence in the reason.

Both the reliability of the principle and our confidence in the reason determine the proper level of confidence in the conclusion.  So it will often not be possible to blame weakness in a rationale on a weak principle or a weak reason alone. 

Example 4

Identification:  This is a weak rationale because the principle is fallible and therefore cannot justify the absolute confidence with which the conclusion is expressed.  Even someone who is very good at facial recognition cannot rule out the possibility that the person identified simply looks very much like the individual who robbed the drugstore.

(Question: Can this criticism of a Weak Principle be accused of committing the error of Exceptional Refutation?)

Example 5

Identification:  This is a weak rationale because, while the principle is reasonably strong, the reason is weak. Linus gives little in the way of independent evidence that the cashier hates black people.  It is possible that she does, but it is at least as plausible that the cashier just doesn't like the way Blake and Linus dress or behave.

This example is worth reflecting on.  This is an explanation, and a very plausible sounding one because the principle is quite reliable.  But it is actually not at all difficult to give plausible sounding explanations for which one does not have even a shred of evidence.  Here is another.

Example 6