CHAPTER FOUR: FOURS

IV. 3. EXCURSUS: DEMONSTRATION AND PROOF

ARTIFICIAL INTELLIGENCE -- NUMBER-CRUNCHING

THE SOURCE OF CONFUSION

THE QUEST FOR BIG N


IV. 3. EXCURSUS: DEMONSTRATION AND PROOF

Let us take an example from number theory sometimes called "slicing a pancake." The problem is to get the maximum number of pieces of pancake with the minimum number of slices. One slice can only yield two pieces. Two parallel slices produce only three pieces; but if they cross, they can yield four. None of this requires belief: with demonstrable certainty we know that the maximum number of pancake pieces we can get with a single slice is two; with two slices, four; and with four slices, eleven. If anyone does not believe the consequences of our slicing the pancake--no matter how many times we may slice it--there is little enough to be done about it except to begin again with a new pancake and to repeat the demonstration, step by step, slice by slice, counting the pieces as we cut. There is no way that anyone can ever get five or more pieces of a pancake with only two (straight) slices; nor can one get more than eleven pieces with four slices.

This so-called "pancake" is an abstract two-dimensional circular figure, and not really a four-dimensional pancake, hot from the griddle, crowned with a pad of melting butter and anointed with maple syrup. So obviously we can't split the "pancake" with a slice parallel to our plate, nor fold it over, since it is supposed to be a plane, two-dimensional, discoidal figure; just regular slices are allowed. Proceeding thusly, such truths can be illustrated certainly and convincingly on one's morning breakfast plate. Three slices yield a maximum of seven pieces; and inexorably, the maximum number of pieces obtainable with four slices of the pancake is eleven. These numbers are not a matter of anyone's beliefs or opinions. They are necessarily true, provided (and this is important) that our demonstration follows all of the rules. And the truth so established has the same rigor and verity as a demonstrated consequence of ordinary school algebra with x's and y's, after which we are permitted to write Q.E.D., standing for quod erat demonstrandum, or "this has been demonstrated."

[ See N.J.A. Sloane, A Handbook of Integer Sequences, p. 20, and sequence 391 on page 59. The sequence, also called the "central polygonal numbers," has the general formula n (n + 1) / 2 + 1. The sequence continues: 5 slices yield 16 pieces, then 22, 29, 37, 46, 56, 67, 79, 92, 106, 121, and 137 pieces from 16 slices, and so forth. ]
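Anyone wishing to verify these figures mechanically--a demonstration par excellence--can do so in a few lines of code. The following Python sketch is our own illustration, not anything from Sloane; the function name max_pieces is ours:

    def max_pieces(n):
        """Maximum pieces of the plane "pancake" obtainable with n straight slices."""
        return n * (n + 1) // 2 + 1

    # Reproduces the sequence quoted from Sloane: 2, 4, 7, 11, 16, 22, ..., 137
    print([max_pieces(n) for n in range(1, 17)])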

THE TANTRIC PANCAKE

The word "demonstration," if it is to be understood with the same force and clarity as in the technical or mathematical sense, must be differentiated from those approximations suggested by current, casual uses of the word. We may gain some appreciation of the complete and utter totality possible for a demonstrable experience-- to complement the "breakfast plate" view of things being cut up into pieces--by reflecting upon the meaning of a "demonstration" from the wholistic viewpoint of certain high Tantric teachings according to the Vajrayana tradition of Tibetan Buddhism. In the practicing tradition of Tantric Buddhism, at the level of the ninth yana--the final stage of the path, which is called maha ati, or ati yoga--one is in a sense at both the beginning and the end, where the view is characterized by a panoramic, global perspective with enormous space and total openness.

Of course, in maha ati there is warmth, there is openness, there is penetration--all those things are there. But if we begin to divide the dharma [Sanskrit = truth], cutting it into little pieces as we would cut a side of beef into sirloin steaks, hamburger, and chuck, with certain cuts of beef more expensive than others, then the dharma is being marketed....The maha ati level is necessary in order to save the dharma from being parcelled and marketed; that is, it is necessary to preserve the wholesomeness of the whole path....

There is a children's story about the sky falling, but we do not actually believe that such a thing could happen. The sky turns into a blue pancake and drops on our head--nobody believes that. But in maha ati experience, it actually does happen. There is a new dimension of shock, a new dimension of logic. It is as though we were furiously calculating a mathematical problem in our notebook, and suddenly a new approach altogether dawned on us, stopping us in our tracks. Our perspective becomes completely different....

Our ordinary approach to reality and truth is so poverty stricken that we don't realize that the truth is not one truth, but all truth....There are all sorts of philosophical, psychological, religious, and emotional tactics that we use to motivate ourselves, which say that we can do something but nobody else can. Since we think we are the only one who can do something, we crank up our machine and we do it. And if it turns out that someone else has done it already, we begin to feel jealous and resentful. In fact, the dharma has been marketed or auctioned in that way. But from the point of view of ati, there is "all" dharma rather than "the" dharma. The notion of "one and only" does not apply anymore. If the gigantic pancake falls on our head, it falls on everybody's head.

In some sense it is both a big joke and a big message. You cannot even run to your next-door neighbor saying, "I had a little pancake fall on my head. What can I do? I want to wash my hair." You have nowhere to go. It is a cosmic pancake that falls everywhere on the face of the earth. You cannot escape--that is the basic point. From that point of view, both the problem and the promise are cosmic.

[ Trungpa, Journey Without Goal: The Tantric Wisdom of the Buddha (Dharma Ocean Series), Prajna Press, Boulder and London (1981), pp. 135-137. ]

SPENCER BROWN / KEYS

For our interests, the crucial distinction between demonstration and proof may be elaborated by citing an eloquent discourse on the subject by G. Spencer Brown. We offer below a slightly edited version of remarks--heretofore only privately published--that Spencer Brown presented to an extraordinary audience in 1973. The group included: dolphin researchers John Lilly and his late wife Toni, cybernetician Heinz von Foerster, mapper of the brain Karl Pribram, electronic publisher Clifford Barney, psychologists of altered states Charles Tart and Ram Dass (who altered his state from being Dr. Richard Alpert, to which he--at last report--has once again returned), psychiatrist George Gallagher, the Gestalt-Sufi-Buddhist Claudio Naranjo, mathematician-yogis Douglas Kelly and Ted Guinn, cataloguer of the Whole Earth Stewart Brand, the late ecologist of the mind Gregory Bateson, the late (but Perennial) Philosopher Alan Watts, and distinguished others. The Esalen Institute setting was appropriately spectacular: a luxuriant green niche of nature, with healthy food and hot baths, above the brontobooming Big Sur surf, under a full moon at the vernal equinox.

The difference between demonstration and proof is that a demonstration is always done by the rules. A computer can do a demonstration....Whether we are giving a demonstration or a computer is demonstrating, we just follow the rules within the calculus. But where we have to prove something, we always find that we cannot do it with the rules within the calculus. In other words, no computer will compute a proof.

Proof is quite different. Proof can never be demonstrated. I will give an example of proof--one which is familiar to us all, an illustration not in the Boolean form but in the common school form of the arithmetic of numbers--a very beautiful theorem and a very beautiful proof by Euclid.

The question asked: Is the number of primes infinite?

The prime numbers, as we see--and it is obvious when you think of it--get sparser as they go on. It is very obvious that they will, if you consider it, because every time we have a new one, we have a new divisor which is likely to hit one of the numbers we're looking at to see if it is prime. If it hits it--if it divides into it--then that number won't be prime (a strange sort of statement: the science of certainty taken in probability terms), because there are that many more primes that could divide into it. So for fairly obvious reasons, as we continue in the number series, the primes get, in general, further and further apart. There are fewer and fewer of them. And what Euclid asked was, do they get so thinly scattered that in the end they stop altogether? Or does this never happen?

This is an example, now, of a mathematical theorem. To make it into a theorem, you actually give the answer, you actually state the proposition: "The number of primes is endless." You may not be certain whether it is true or not. You may still be asking the question, do they come to an end, or do they go on?

Well, to illustrate the difference between mathematical art--because it now needs an art to do the theorem, where it only used a technique, a mechanical application, to demonstrate something (and we don't need to do that ourselves, as computers can do it so much better)--we will now do something that a computer can never do. Because what we are going to do is find the answer to this question, do the primes go on forever or not? We are going to find this answer quite definitely, and we are not going to find it by computation, because it cannot be found by computation.

But it can be found like this. This is the way Euclid found it. He said--supposing they come to a stop--all right, if they come to a stop then we know they are going to go on for a long time until we come to big primes, but, if they do come to a stop there will be some largest prime, call it Big N. That's it. That is the last prime, the biggest of the lot. If they come to a stop, there must be such a prime. Now, if there is such a prime--and there it is, up there--let us construct another number which looks like this: all primes, every single one of them, up to and including Big N. Right. We have made this new number by multiplying all the primes together. Now, Big N being the largest prime, this new number is made up of all the primes there are. There isn't another prime, because we have assumed that Big N is the largest.

On the hypothesis that Big N is the largest, this new number is all of the primes multiplied together. And having done this multiplication and getting the answer, we'll call the (new) answer Big M. We will take this new number, Big M, and we will add one. Now we will examine the properties of Big M Plus One.

You see, this is why arithmetic is so lovely: it's about individuals. Here is our number Big M, as an individual, and here is Big M Plus One. It is a hypothetical number. Actually, it is a nonexistent number. And this is why we can't properly speak of number as existing or as not existing, because some of them do and some of them don't.

Big M Plus One, let's examine its properties. Well, it is obviously not divisible by any of the primes up to and including Big N, because we know that they all divide Big M. Therefore, every single prime leaves a remainder when we attempt to divide it into Big M Plus One. So Big M Plus One must either be prime, because it is not divisible by any existing prime, or if it ain't prime, then it must be divisible by a prime which is larger than Big N. Therefore, by assuming that there is a biggest prime, which we have called Big N, we have ineluctably shown that this assumption leads, absolutely without any doubt, to the construction of a larger prime which is either Big M Plus One, or some other prime larger than Big N which divides Big M Plus One.

And that is how Euclid did it. There are many other, later proofs, of course. But that is still one of the simplest and most beautiful. And the answer is absolutely certain that there is no largest prime, that they do go on forever. This cannot be done by a computer. Currently there is no computer that has done that.

We can do the prime factorials. Let us multiply the first three primes: 2 X 3 X 5 = 30, and then add one. Right, we get 31. 31 is prime, but if you go out far enough, you will find that you get one that isn't prime. But it will be divisible by a prime bigger than the largest prime you have used. Let's see if we can find one. Here, wait a minute, 211 is prime, isn't it? I'm just thinking of the prime factorial plus one: at seven, it's 2 X 3 X 5 X 7 = 210, + 1 = 211; that's the prime factorial plus one. And 211 is prime as far as I know. We want a table of primes here. We multiply by the next one, 11, and add one, and it comes out 2311 (2 X 3 X 5 X 7 X 11 + 1 = 2311). Is that prime?

[ It is. ]

Anyway, I do assure you that if you go on long enough, taking the prime factorial and adding one, you will find one that is not prime. But that doesn't matter (so far as the implications for Euclid's proof are concerned), because it will be divisible by a prime that is bigger than the biggest prime you have used to produce it.

[ March 19-20, 1973. The AUM Conference Transcript--documenting what several of the participants felt was a truly astounding performance by G. Spencer Brown/James Keys--was recorded, edited, and privately published by the present author, together with Clifford Barney. It is available on the World Wide Web at http://members.aol.com/lawsofform/.]

211 and 2311 are, of course, only two of the many aliases of Big M Plus One; and each of these is in fact prime, that is, not divisible by any other number but itself and one. In its next manifestation, however, as 30,031 (the prime factorial for 13 plus one), our "non-existent" messenger (or angel?) from the eternal realms, Big M Plus One, is composite or non-prime, thus confirming the Spencer Brown / Keys assurances. For in this case 30,031 is divisible by two primes, 59 and 509, both of them larger than the prime used to produce the number, which was 13.

If it were always prime, you would have immediately a means, a formula, for producing primes, and this we haven't got. There is no formula for producing primes except going about it the hard way and seeing that they don't divide by anything.

[ James Keys, AUM Conference, Transcript pp. 50-56. ]
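The curious reader can retrace these manifestations of Big M Plus One mechanically--an exercise which is, note well, a demonstration and not a proof. The following Python sketch is our own illustration (the function name smallest_prime_factor is ours, not Spencer Brown's): it multiplies successive initial runs of primes, adds one, and then hunts for the smallest prime factor by trial division, "the hard way":

    def smallest_prime_factor(m):
        """Return the smallest prime factor of m (m itself, when m is prime)."""
        d = 2
        while d * d <= m:
            if m % d == 0:
                return d
            d += 1
        return m

    for primes in ([2, 3, 5], [2, 3, 5, 7], [2, 3, 5, 7, 11], [2, 3, 5, 7, 11, 13]):
        m = 1
        for p in primes:
            m *= p              # the "prime factorial" (primorial)
        m += 1                  # Big M Plus One
        f = smallest_prime_factor(m)
        if f == m:
            print(m, "is prime")
        else:
            print(m, "=", f, "X", m // f, "-- both factors exceed", max(primes))

Run, it reports 31, 211, and 2311 as prime, and 30,031 = 59 X 509, exactly as recounted above.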

ARTIFICIAL INTELLIGENCE -- NUMBER-CRUNCHING

These days people use the computer to approach such problems as determining whether or not a number is prime, in the anxious hope of being spared what we now see is the only truly effective method: extensive and tedious computations to see if it divides by anything. We refer to such wishful-thinking ruses as number-crunching: an activity altogether different from conceiving or designing a proof; and so we do not know if this activity REALLY leads to the True.

The confusion Spencer Brown referred to has been endemic since the beginnings of modern AI (artificial intelligence) research. The team of Allen Newell and Herbert Simon, then at the Rand Corporation, attempted with their first program, called the Logic Theorist, to show that the new electronic computers

were more than merely "number crunchers" and...actually prove theorems taken from Whitehead and Russell's Principia....And in August 1956, the Logic Theorist program actually produced on Rand's Johnniac computer the first complete proof of a theorem (Russell and Whitehead's theorem 2.01)....The demonstration [sic!] that the Logic Theorist could prove [sic!] theorems was itself remarkable....However, the Journal of Symbolic Logic declined to publish an article co-authored by the Logic Theorist in which this proof was reported.

[ Howard Gardner, The Mind's New Science: A History of the Cognitive Revolution, Basic Books, New York (1987), p. 145. P. McCorduck, Machines Who Think, W. H. Freeman, San Francisco (1979), p. 142, is cited in reference to the Journal's rejection notice and their demonstrable lack of any sense of humor. ]

This incident is noteworthy because of deliberate attempts to create--with their symbol manipulation, including list processing and detachment methods--analogs of human thought processes, based upon the symbolic logic of Russell and Whitehead's Principia. As Howard Gardner recounts the story, supplying his own emphasis:

Newell, Simon, and their colleague [at Rand Corporation] Cliff Shaw stressed that they were demonstrating not merely thinking of a generic sort but, rather, thinking of the kind in which humans engage.

[ Gardner, The Mind's New Science, p. 147. ]

This claim was met with serious challenges and, as it transpired, the history of AI and programming in general was strongly marked by Newell and Simon's "dry" approach based on notions of the "physical symbol system" and "production systems" developed in their subsequent General Problem Solver program (1972). Even so, many in AI and related fields remain philosophically confused because of an underlying problem with the mathematical terminology itself. Further along in the transcript quoted at length above, Spencer Brown/Keys pointed to the venerable roots of this problem:

I know, as an engineer, the computer boys have vastly oversold their products by saying that they can do anything that the human mind can do and this is not so.

Newell, Simon and Shaw did not, themselves, make an explicit claim to have designed a thinking machine, but they went to great lengths to emphasize the parallels between human and machine problem solving. Enthusiasm for their earlier achievements has promoted allusions to "insight" in reference to "theorems."

For example, they reported certain moments of apparent insight as well as a reliance on an executive process that coordinates the elementary operations of the Logic Theorist...and selects the subproblems and theorems upon which the methods operate.

[ Gardner, The Mind's New Science, p. 147 f. ]

This realm of theory, however, is pervaded by shadows, and by misty areas of secrecy in which the understanding of just what computer programs are actually doing, or what they soon will be able to do, is frankly not very clear. Some reasons for this are obvious: the security surrounding research in the field of Artificial Intelligence, and the fact that so much of this work is funded by entities not committed to the free dissemination of knowledge. Added to this are the intrinsic difficulties of the subject, in part because of its newness. But within a decade of Spencer Brown's memorable presentation at Esalen, an astounding advance toward theory-formation by machine was accomplished by Douglas Lenat with his EURISKO. This work has profound implications for epistemology and for the study of inductive reasoning that so deeply characterize our quest for the True. Mr. Lenat's program, working at the task of "discovering and modifying useful new heuristics" (and running for ten thousand hours), presented evidence of machine-generated theorems with some apparent qualities of "insight."

[ Douglas B. Lenat, "Theory Formation and Heuristic Search. The Nature of Heuristics II: Background and Examples," Artificial Intelligence: An International Journal, Volume 21 (1983), pp. 31-59; quote, p. 57. ]

Just a year after Lenat's publication, Paul Levinson helped to put these issues into perspective, while emphasizing the distinction between two types of Artificial Intelligence:

"auxiliary" or "augmentative" intelligence (as in mainframes extending and augmenting the social epistemological enterprise of science, and micros extending and augmenting thinking and communication on the individual level), and "autonomous" intelligence, or claims that computers/robots can or will function as self-operating entities, in independence of humans after the initial human programming. The difference between these two types of AI is akin to the difference between eyeglasses and eyes.

"Expert systems" and "human meat machines" claims for autonomous intelligence in machines will be examined and found wanting....The problem with current attempts at autonomous intelligence is that the machines in which they are situated are not alive, or no not have enough of the characteristics necessary for the sustenance of the "living" label. Put otherwise, the conclusion will be: in order to have artificial intelligence (the autonomous kind), we first must have artificial life; or: when we indeed have created artificial intelligence which everyone agrees are truly intelligent and autonomous, we'll look at theese "machines" and say: My God (or whatever)! They're alive!

[ Paul Levinson, "Artificial Intelligence and Real Life." Abstract of a talk given at the New School for Social Research, as part of the Colloquium on Philosophy and Technology, sponsored by the Polytechnic Institute of New York and the New School, (November 12, 1984). ]

THE SOURCE OF CONFUSION

Returning once again to the comments of G. Spencer Brown, who diligently and patiently strives to unravel the knotted twine of logic and inference that still binds much mathematical and programming thought in confusion about the real potential of computers:

They cannot do the most elementary things that the human mind can do. And I blame Russell and Whitehead for totally mixing up proof and demonstration. If you go through the Principia, there is not a single theorem, not one theorem...because what they call theorems are consequences. Now this is totally confused: the idea of the difference between demonstration and proof in mathematics. In fact Russell, you see, in suggesting it, completely confused them; and people have done so ever since. What he called theorems are in fact consequences; they are algebraic consequences which can be, in fact, demonstrated [as shown--demonstrated--by the Logic Theorist]. And indeed, Russell says, "These theorems"--he calls them theorems, they are consequences--"can be proved." And then he does the demonstration, and then he calls it "Dem." "Dem." is short for "demonstration."

The two words are used interchangeably, and wrongly. There is a difference, and what can be demonstrated is done within the system and can be done by computer. And what cannot be demonstrated, but may be proved, cannot be done by computer. It must have a person to do it. No computer can prove it, because it is not proved by computation. The steps of this proof, Euclid's proof, were not computational steps. The computer cannot do it because it is not computation.

[However], they had a precedent, in that Euclid himself already rightly called this [that "the number of primes is endless"] a theorem, and rightly calls it algebraic. His geometric consequences he called theorems; they are not. So the confusion developed right at the beginning with Euclid, who called his geometric consequences, which can be computed, "theorems." Wrongly. Euclid was the first offender. And from him--it just shows how we have copied--we have copied his error through hundreds of years.

I may be wrong, you see. My Latin--I have little of it--perhaps he was O.K. Euclid said quod erat demonstrandum, "this has been demonstrated." It is O.K. after a demonstration; it is misleading after a proof. And maybe he did not make this error, but we have. We have called them theorems when we should call them consequences. And this has been responsible for a vast system of error [which] has grown up, because a computer has been found to be able to demonstrate consequences--and all you need is the calculating facility to do this.

And consequently, the demonstration of consequences--in other words, calculations--has been confused with the proof of theorems, which is another matter altogether. Because of this confusion it has been thought that a computer, therefore, can do practically all that a man's mind can do. But it can't, because only the most minor function of a man's mind...is to compute. And we have, in fact, this tremendous emphasis--because of the confusion, in mathematics, between computation and actual mathematical thinking--which has led us to believe that computers have minds and can do what we can do....Even here, what a computer can do, a man can do better if he gives himself to the problem, because he has the capacity of seeing in a way the computer never can.

[ Keys, AUM Conference Transcript, pp. 56-60. ]

Since the book which Russell and Whitehead chose to call the Principia Mathematica resides menacingly in the distant gloom of this historical confusion, it may be worthwhile tracking the mystery to the early years of this century, noting the possible influence of biographical factors in circumstances at Cambridge around 1910-1913.

Whitehead was a man of mathematics. Russell knew the forms, but he actually had no instinctual ability in mathematics. Whitehead actually had. But Russell, being a stronger character, was able to program Whitehead, and you will see this if you examine the last mathematical work Whitehead wrote, which is called the Treatise on Universal Algebra with Applications, volume one. I asked Bertrand Russell...I said I had never been able to get volume two, and Russell said, "Oh, he never wrote it." So it's all sort of a mystery.

But the mathematical principles of algebra, in the usual complicated way are set out, including the Boolean algebras, in this volume produced in 1898 as an only edition. By that time, Russell, who was the stronger of the two characters, had got together with Whitehead to do Principia Mathematica, which nobody was ever going to digest. It was a very ostentatious title, because they had chosen the title which Newton had used for his greatest work. Incidentally--it is an extraordinary thing in the academic world, people are very silent about these things but--it was a very, very presumptuous title, I think, to take for this work.

[ Keys, AUM Conference Transcript, p.20 f. ]

Late in life, Lord Russell himself paid tribute to Laws of Form--first published in Great Britain by George Allen and Unwin (1969), and in the United States by the Julian Press (1972)--in comments printed on the dust jacket of the U.S. edition:

In this book G. Spencer Brown has succeeded in doing what, in mathematics, is very rare indeed. He has revealed a new calculus, of great power and simplicity...that particular calculus which lets us see deeper into the nature of mathematics. Indeed, I still consider, on re-examining this book after a two-year interval, that it is a work of genius.

So too, in its way, is the American Heritage Dictionary a work of genius, particularly for its rich network of references compiled of probable (attested) and hypothetical (unattested) Indo-European roots for American English words. But unfortunately, it does not provide much help in distinguishing and clarifying either the words or the concepts of DEMONSTRATION and PROOF. The first of these (through the Latin demonstrare) is formed from DE "completely" + monstrare "to show," from monstrum "divine portent," from monere, "to warn," which words (with cognates at both MONSTRANCE and MONSTER) are from the Indo-European root men-(1), "to think." For PROOF (or the verb to PROVE) the link is made back to the Latin verb probare, "to test, demonstrate as good," from probus, "good, virtuous." Here we note two points of interest: 1. that the word DEMONSTRATE is used in the definitions related to PROOF, and 2. that the idea of PROOF which we have introduced into the discussion of the True (and the False) is etymologically related to the Good. The Indo-European root cited is per-(1), the base for many prepositions and other lexical elements meaning "through," "forward," "at" and "around." Coming at it the other way, this Dictionary gives, as its first definition for the word DEMONSTRATION, "To prove or make manifest by reasoning or evidence." At this level we are going around in circles; and a useful distinction between the two words depends upon following the conventions of a more specialized context, such as the way these words might be employed for articulating formal methodologies in the discipline of mathematics.

THE QUEST FOR BIG N

Obviously, a set of operations is very simply programmed to conduct a test of divisibility, although for a large number it may go on for a while, even when run on the largest and fastest of today's computational machines. One modern candidate for Big N (as it were, prime to the largest) was produced in 1984 by Harry Nelson and David Slowinski, who devised an operational shortcut for a program which they ran on a CRAY supercomputer, generating the number 2 raised to the power of 132,049, minus one--which they say is prime. Like all of the Mersenne primes (following Euclid in Book IX of the Elements) it leads to an even perfect number if, in this case, it is multiplied by 2 raised to the power of 132,048 (in general: 2 raised to the power of n - 1, times the Mersenne prime, 2 raised to the power of n, minus one). The general form, 2 raised to the power of n, minus one, is usually written as a Mersenne number: M sub n, or here M sub 132,049. The notation is named after the form developed by Père Marin Mersenne,

a natural philosopher, theologian, mathematician and musical theorist, and a moving spirit of one of the most important French scientific groups of the early seventeenth century. He was a friend of Descartes, with whom he studied at Jesuit college... Fermat...and the Pascals...to whom he proposed problems concerning perfect numbers and related ideas.

The perfect numbers correspond one for one with the Mersenne primes....As long as only hand calculation was available, the discovery of Mersenne primes depended on human labor in actually making the necessary calculations, and subtle theorems that showed that only possible divisors of a certain type need be tried. The labor for large numbers was immense. Mersenne himself stated that all eternity would not be sufficient to decide if a 15- or 20-digit number were prime.

In 1814 Peter Barlow in an article in A New Mathematical and Philosophical Dictionary wrote, "Euler ascertained that 2 raised to the power of 31 minus one = 2,147,483,647 is a prime number; and this is the greatest at present known to be such, and, consequently, the last of the above perfect numbers which depends upon this, [i.e., 2 raised to the power of 30, times 2 raised to the power of 31 minus one] is the greatest perfect number known at present, and probably the greatest that ever will be discovered; for as they are merely curious without being useful, it is not likely that any person will attempt to find one beyond it." Barlow underestimated the fascination of record-breaking for mathematicians, and he could not foresee the electronic computer.

By allowing millions of calculations per second, the computer opened up vast reaches of numbers that had previously been inaccessible and allowed mathematicians to make effective use of much more powerful tests for primality. These tests decide whether n is prime by analysing the factors of either n - 1 or n + 1. Because of their special form, Mersenne and Fermat numbers are easier to test for primality than any other forms, and all the recent record-breaking primes have been Mersenne numbers, and have automatically led to a new perfect number.

[ David Wells, The Penguin Dictionary of Curious and Interesting Numbers, Penguin Books (1986), pp. 107-110, where appears a list of perfect numbers, including the four known to the ancient Greeks: 6, 28, 496, and 8128; see also p. 137 f. for the Mersenne numbers. ]
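Both halves of this story--the Euclid-Mersenne recipe for perfect numbers and the special-form primality tests--fit in a few lines of code. The Python sketch below is our own illustration, and the function name is ours; the Lucas-Lehmer test it employs is the standard special-form test behind the record-breaking Mersenne primes, though Wells does not name it in the passage above. For each Mersenne prime found, the sketch prints Euclid's even perfect number:

    def lucas_lehmer(p):
        """True if the Mersenne number 2**p - 1 is prime (for odd prime p)."""
        m = 2 ** p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    for p in (3, 5, 7, 11, 13, 17, 19):
        if lucas_lehmer(p):
            mersenne = 2 ** p - 1                   # M sub p
            perfect = 2 ** (p - 1) * mersenne       # Euclid's even perfect number
            print("p =", p, " M =", mersenne, " perfect number =", perfect)

This recovers the perfect numbers 28, 496, 8128, and 33,550,336 from the list cited above (6 corresponds to the exponent 2, which the odd-exponent form of the test does not cover). Yet even at the scale of M sub 132,049, such a run remains, in Spencer Brown's terms, a demonstration carried out within the rules--number-crunching--and not a proof.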

The number-crunchers Harry Nelson and David Slowinski have since had their big prime number exceeded by a newer and bigger candidate for Big N, 2 raised to the power of 216,091, minus one--and hence for the largest perfect number: 2 raised to the power of 216,090 times M sub 216,091 (i.e., times 2 raised to the power of 216,091, minus one)--generated by Chevron Geosciences in 1985. Even so, the French mathematician René Thom, featured on one of the Nova TV programs, clearly stated the standard methodological objection to computer processing: "But is it a proof?" Actually, this was in reference to a set of configurations drawn by a computer in 1976 in order to attack the Four Color Problem. But after all, as Thom protested, the computer could have made a mistake, and how would anyone know?

This underscores the need for distinctions between demonstration and proof drawn with such painstaking clarity by G. Spencer Brown/James Keys at the AUM Conference. Referring to Euclid's classic example of a mathematical approach to the True by inventing a theorem and conducting a proof, this methodological point is driven home:

No one could do it on a computer because we were not doing computation. Computation is counting in either direction, no more, no less. There is nothing more to computation than that, nothing more. Let's go through the steps again. Where's the computing? We compute nowhere. There is no computation in this proof. Not a single computation can be made, not one. The whole process is a proof. In the whole process of a proof, there is not one single computation, nothing that a computer could do. We were imagining doing a computation of a particular kind--but we weren't actually doing it, because there were no numbers to put in the places. In fact, there only could have been a computation if our number, Big N, being prime to the largest, happened to exist. Yes. If it happened to exist and if we knew what it was, then we could do this whole thing on a computer. But it doesn't happen to exist. Yet, in order to find that it doesn't happen to exist, we had to go through the imaginary steps of computing in this particular way. We were divining the answer. We were divining what had to be done by making certain deductions and seeing what they led to. This was an artistic process, not a mechanical one.

#