The relevance of symbolic logic to other subjects



Let's consider why symbolic logic is of special interest to the philosopher. Applying the formal techniques of logic to a vague philosophical argument can help to clearly display the controversial parts of the argument because symbolic statements are free of vagueness and ambiguity.

For example, one philosopher claims that from the premise "God is loving and all-powerful" she can deduce the sentence "There shouldn't be earthquakes or murder or any other evil in the world." Some philosophers initially are likely to agree that this is a valid deduction; others are likely to disagree. One reason for the disagreement is that it's so hard to tell just what the two sentences are really saying. But if the sentences are translated into symbolic logic, then the sentences will be precise. With precise sentences it is much clearer whether the conclusion does or doesn't follow from the premises. If the conclusion doesn't follow, then it will be clearer just what else must be assumed to make the conclusion follow. Then the philosophers can concentrate on discussing whether these additional assumptions are acceptable. Therefore, the use of symbolic logic can help (and has helped) direct the philosophers' discussions toward the crucial points in their disputes.
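The point can be illustrated with a short, hypothetical sketch (the letters L, P, E and the helper `is_valid` are my own illustrative names, not standard notation): a brute-force truth-table check shows that the bare deduction is invalid, and pinpoints exactly which extra assumption must be added to make it valid.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """Propositional validity: the conclusion follows iff no truth
    assignment makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# L = "God is loving", P = "God is all-powerful", E = "there is evil"
premises = [lambda v: v["L"] and v["P"]]   # "God is loving and all-powerful"
conclusion = lambda v: not v["E"]          # "there is no evil in the world"

print(is_valid(premises, conclusion, ["L", "P", "E"]))  # False: a hidden premise is missing

# Adding the bridging assumption "(L and P) implies not E" makes the
# argument valid, so the debate can now focus on that assumption alone.
bridge = lambda v: (not (v["L"] and v["P"])) or (not v["E"])
print(is_valid(premises + [bridge], conclusion, ["L", "P", "E"]))  # True
```

The counterexample the checker finds (a loving, all-powerful God together with evil) is precisely the assignment the philosophers must argue about.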

Some philosophers believe that symbolic logic can reveal the structure of all possible good inference, and so reveal the common skeletal structure that underlies all reasonable thought processes. Bertrand Russell, Ludwig Wittgenstein, and other philosophers have argued that there is an intimate connection among these three things: predicate logic (which is the main kind of symbolic logic studied in this course), the human mind, and the deep structure of the physical world.

The symbolic analysis of our natural language can reveal exciting new information about the character of language itself. For example, can all the meaningful sentences of English, but none of the nonsense and ungrammatical ones, be generated mechanically from a small number of symbolic rules? The attempt to answer this question is an active area of contemporary philosophical research begun by Noam Chomsky at M.I.T.
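A minimal sketch of the generative idea (the toy grammar and the `generate` helper are illustrative assumptions of mine, vastly simpler than anything Chomsky proposed): a handful of symbolic rewrite rules mechanically produce a set of grammatical English sentences and nothing else.

```python
import itertools

# A toy context-free grammar: S -> NP VP, NP -> "the" N, VP -> V NP.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["logician"]],
    "V":  [["sees"], ["admires"]],
}

def generate(symbol="S"):
    """Yield every word sequence the grammar derives from `symbol`."""
    if symbol not in GRAMMAR:  # a terminal word: yield it as-is
        yield [symbol]
        return
    for production in GRAMMAR[symbol]:
        # Expand each symbol of the production, then combine the results.
        for parts in itertools.product(*(generate(s) for s in production)):
            yield [word for part in parts for word in part]

sentences = [" ".join(words) for words in generate()]
print(sentences)  # 8 sentences such as "the dog admires the logician"
```

Whether such rules can be scaled up to all of English, without also generating nonsense, is the open question the paragraph describes.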

Is it theoretically possible to build a machine that could sort the formally valid arguments from the invalid ones? In 1936, Alonzo Church and Alan Turing (who later helped break Nazi Germany's Enigma code in World War II) independently gave convincing proofs that this is impossible, as we shall see.
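The heart of Turing's argument can be sketched in a few lines of code (the function names are my own; this illustrates the diagonal idea, it is not a formal proof): hand any claimed decider a program built to do the opposite of whatever the decider predicts about it.

```python
def diagonalize(halts):
    """Given any claimed halting-decider `halts(program)`, construct a
    program that defeats it, following Turing's diagonal argument."""
    def contrary():
        if halts(contrary):   # the decider predicts contrary halts...
            while True:       # ...so contrary loops forever
                pass
        # the decider predicts contrary loops, so contrary halts at once
    return contrary

# Any candidate decider is refuted. For instance, one that always
# answers "never halts" is contradicted because contrary halts:
claimed_decider = lambda program: False
diagonalize(claimed_decider)()   # returns immediately, refuting the claim
```

Since every candidate decider is defeated by its own diagonal program, no such machine can exist, and the same barrier applies to sorting valid from invalid arguments in full predicate logic.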

Logic also impacts philosophy in other ways. Consider this seemingly good inference that has, unfortunately, an unacceptable conclusion. "Because 8 is the number of planets in our solar system, and because it is logically necessary that 8 is greater than 5, it follows by substitution that it is logically necessary that the number of planets in our solar system is greater than 5." This conclusion is not correct, because the solar system might have contained fewer planets if it had evolved differently. The problem of diagnosing and correcting the error in this reasoning is an unsolved problem in philosophy.



Now let's consider why symbolic logic is of special interest to the computer scientist. One area of computer science is A.I., or artificial intelligence. An A.I. process is a process by which a computer or robot performs tasks that, when performed by humans, require intelligence. For example, it takes intelligence to carry on a long, sensible conversation in English, and A.I. researchers hope to build a computerized robot that can do this. Researchers generally believe that making progress on getting a computer to use English intelligently will require giving the computer a massive amount of knowledge about the world outside it. How can researchers supply all this knowledge in a form the computer can actually use? This question raises the problem of knowledge representation. Many A.I. researchers believe the key to success is to translate this knowledge into symbolic logic rather than into ordinary computer languages. More specifically, any method used for representing knowledge is just an implementation of some subset of predicate logic.
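As a hypothetical miniature of the idea (the facts, rules, and the `forward_chain` helper are my own illustrative choices, not a real knowledge-representation system): knowledge encoded as logical if-then rules can be exploited by purely mechanical inference.

```python
def forward_chain(facts, rules):
    """Derive all consequences of rules of the form (body -> head)
    by repeatedly firing any rule whose body is fully known."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)       # the rule fires: learn its head
                changed = True
    return known

# A tiny, hypothetical knowledge base about the outside world.
facts = {"rains", "outside(alice)"}
rules = [
    ({"rains", "outside(alice)"}, "wet(alice)"),
    ({"wet(alice)"}, "cold(alice)"),
]
print(forward_chain(facts, rules))  # derives wet(alice) and cold(alice)
```

The computer never "understands" rain or Alice; it derives new facts purely by the logical form of the rules, which is exactly what makes the logical encoding so attractive.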


A computer is a device that manipulates information according to a program. Computers are logic devices in two senses: their design by humans follows basic principles of symbolic logic, and their programs are also based on principles of symbolic logic. More specifically, computer science is involved with symbolic logic in the following five ways:

(1) The first programming language evolved from a formal language for symbolic logic.

(2) The electrical engineer who designs digital computers creates the machines' gates and networks according to the principles of propositional logic, that is, a two-valued Boolean Algebra. George Boole's work in his 1854 book The Laws of Thought is the common ancestor of propositional logic, of monadic predicate logic, of Boolean algebra, and of the circuit designs of digital electronic computers.

(3) Symbolic logic is useful for simplifying complicated circuits. The techniques of symbolic logic can prove that two circuits which seem quite different will nevertheless yield the same output whenever they are given the same input. These techniques can be used to redesign a given circuit so that it works the same way but with fewer components or with less expensive components.
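A sketch of such an equivalence proof, assuming a brute-force truth-table comparison (adequate for small circuits; industrial tools use subtler methods): two gate arrangements related by the distributive law of Boolean algebra compute the same function, so the cheaper one may replace the other.

```python
from itertools import product

def equivalent(f, g, n):
    """Two combinational circuits compute the same Boolean function
    iff they agree on all 2**n input combinations."""
    return all(bool(f(*bits)) == bool(g(*bits))
               for bits in product([0, 1], repeat=n))

# Original: two AND gates feeding an OR gate.
circuit_a = lambda a, b, c: (a and b) or (a and c)
# Simplified via the distributive law: one OR gate and one AND gate.
circuit_b = lambda a, b, c: a and (b or c)

print(equivalent(circuit_a, circuit_b, 3))  # True: one gate saved
```

The truth-table check is itself an application of propositional logic: two formulas are equivalent exactly when they agree under every valuation.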

(4) Symbolic logic is useful for analyzing the theoretical limits of ideal digital computers. Symbolic logic techniques can be used to establish what functions a computer can and cannot compute (in principle, that is, with no limits on the size of memory or the amount of time available). The techniques can also be used to establish speed limits for certain kinds of calculations, and to establish whether a computer program will in principle correctly do what its programmer designed it to do.

(5) Symbolic logic techniques are used in automated reasoning programs.
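One minimal sketch of such a program, assuming the propositional resolution method (the names and the clause encoding here are my own): to prove a conclusion from some premises, add the conclusion's negation and search mechanically for the empty clause, which signals a contradiction.

```python
def resolve(c1, c2):
    """All resolvents of two clauses. Clauses are sets of literal
    strings; 'p' and '~p' are complementary."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def refutes(clauses):
    """Resolution refutation: saturate until the empty clause appears
    (contradiction found) or no new clauses can be derived."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True   # empty clause: premises are contradictory
                    new.add(r)
        if new <= clauses:
            return False              # saturation: no refutation exists
        clauses |= new

# Prove q from "p implies q" and "p" by refuting {~p or q, p, ~q}.
print(refutes([{"~p", "q"}, {"p"}, {"~q"}]))  # True: q follows
```

Resolution is complete for refutation in propositional logic, which is why this blind clause-crunching, suitably extended to predicate logic, underlies many automated reasoning programs.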



Symbolic logic is of special interest to the mathematician because it is a useful tool for revealing structure. The notion of structure is the key notion of modern mathematics. For example, an isomorphism is an identity of structure between two objects.

Some people may have the idea that a number is an entity with some mysterious features hidden behind a symbol, a proper name, like "7". But according to modern mathematics an object is nothing else than its relations with other objects.... To identify an object is to identify a structure or ... class of structures. (Jean-Yves Beziau).

When symbolic logic is augmented by some principles of set theory, every mathematical statement can be expressed, without significant loss of content, as a statement of symbolic logic in a way that helpfully reveals its structure. Also, the proofs and theorems of any field of mathematics can be translated into proofs and theorems of logic. When a field of mathematics is represented this way as a part of logic, the logician can more clearly see the extent of that field and its presuppositions.

After translating a mathematical theory into symbolic logic it is much easier to establish the answers to such questions as "Will this theory permit the deduction of a contradiction?" and "Could there be a machine which could in principle always correctly decide whether a statement of this theory is true?"

The automatic theorem-proving procedures of the logicians can be (and have been) applied to discover new theorems of mathematics that mathematicians working alone had not discovered, but which are interesting and worth publishing.