About the title

I changed the title of this blog on March 20, 2013 (it used to be “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting to the public the works of P. Deligne on the occasion of the award of the Abel prize to Deligne in 2013 (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would have preferred the prize to P. Deligne to be awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: taking over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Wednesday, April 3, 2013

What is mathematics?

Previous post: D. Zeilberger's Opinions 1 and 62.


This is a reply to a comment by vznvzn to a post in this blog.


I am not in the business of predicting the future. I have no idea what people will take seriously in 2050. I do not expect that Gowers’s fantasies, or yours, which are nearly the same, will turn into reality. I wouldn’t be surprised if humanity returned by that time to the Middle Ages, or even to a pre-historical level. But I wouldn’t bet even a dollar on this. At the same time, mathematics can disappear very quickly. Mathematics is an extremely unusual and fragile human activity, which appeared by accident, then disappeared for almost a thousand years, then slowly returned. The flourishing of mathematics in the 20th century is very exceptional. In fact, we already have many more mathematicians (this is true independently of which meaning of the word “mathematician” one uses) than society, or, if you prefer, humanity, needs.

The meaning of the words “mathematics” and “mathematician” becomes important the moment “computer-assisted proofs” are mentioned. Even Gowers agrees that if his project succeeds, there will be no (pure) mathematicians in the current (or 300-, or 2000-year-old) sense. The issue could be the topic of a long discussion, or of a serious monograph which would be happily published by Princeton University Press, but I am not sure that you are interested in going into it deeply. Let me only point out that mathematics has any value only as a human activity. It is partially a science, but to a large extent it is an art. All proofs belong to the art part. They are not needed at all for applications of mathematics. If a proof cannot be understood by humans (like the purported proofs in your examples), it has no value. Or, rather, its value is negative: a lot of human time and computer resources were wasted.

Now, a few words about your examples. The Kepler conjecture is not an interesting mathematical problem. It is not related to anything else, and its solution is isolated as well. Physicists had some limited interest in it, but for them it was obvious for a long time (probably from the very beginning) that the conjectured result is true.

The 4-color problem is not interesting either. Think for a moment: who cares whether every map can be colored with only 4 colors? In the real world we have many more colors at our disposal, and in mathematics we have a beautiful, elementary, but conceptual proof of a theorem to the effect that 5 colors are sufficient. This proof deserves to be studied by every student of mathematics, but nobody rushed to study the Appel–Haken “proof” of the 4-color “theorem”. When a graduate student was assigned the task of studying it (and, if I remember correctly, of reproducing the computer part for the first time), he very soon found several gaps. Then Haken published an amusing article, which can be summarized as follows: the “proof” has such a nature that it can have only a few gaps, and finding even one is extremely difficult; therefore, if somebody did find a gap, it does not matter. This is so ridiculous that I am sure my summary is not complete. Today it is much easier than at that time to reproduce the computer part, and the human part has also been simplified (it consists in verifying by hand some properties of a bunch of graphs: more than 1,000 or even 1,500 in the Appel–Haken “proof”, fewer than 600 now).

Wiles deserved a Fields medal not because he proved the LFT; he deserved it already in 1990, before he completed his proof. In any case, the main and really important result of his work is not the proof of the LFT (that is for popular books), but the proof of the so-called modularity conjecture for nearly all cases (his students later completed the proof of the modularity conjecture for the exceptional cases). Due to some previous work by other mathematicians, all very abstract and conceptual, this immediately implies the LFT. Before this work (mid-1980s), there was no reason even to expect that the LFT is true. Wiles himself learned about the LFT during his school years (every future mathematician does) and dreamed about proving it (only a few have such dreams). But he did not move a finger before it was reduced to the modularity conjecture. Gauss, who was considered the King of Mathematics already during his lifetime, was primarily a number theorist. When asked, he dismissed the LFT as a completely uninteresting problem: “every idiot can invent zillions of such problems, simply stated, but requiring hundreds of years of work of wise men to be solved”. Some banker has already modified the LFT into a more general statement not following from Wiles’s work and even announced a monetary prize for a proof of his conjecture. I am not sure, but, probably, he wanted a solution within a specified time frame; perhaps there is no money for this anymore.


Let me tell you about another, mostly forgotten by now, example. It is relevant here because, like the 3x+1 problem (the Collatz conjecture), it deals with iterations of a simple rule, and for another reason, which I will mention later. In other words, both at least formally belong to the field of dynamical systems, being questions about iterations.
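To make the kind of iteration in question concrete, here is a minimal sketch of the 3x+1 rule in Python (the function name and the step count it returns are my own illustration, not anything from the problem's literature):

```python
def collatz_steps(n: int) -> int:
    """Apply the 3x+1 rule (halve if even, else map n to 3n+1)
    until reaching 1, and return the number of steps taken.
    The Collatz conjecture asserts that this loop terminates
    for every positive integer n; no proof is known."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 27 is a classic example of a small start with a long trajectory.
print(collatz_steps(27))
```

The rule itself is trivial to state and to compute; the difficulty is entirely in proving anything about all starting values at once.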

My example is the Feigenbaum conjecture about iterations of maps of an interval to itself. Mitchell Feigenbaum is a theoretical physicist who was led to his conjecture by physical intuition and extensive computer experiments. (By the way, I have no objections when computers are used to collect evidence.) The moment it was published, it was considered a very deep insight, even as a conjecture, and a very important and challenging one. The Feigenbaum conjecture was proved, with some substantial help from computers, only a few years later by the (outstanding independently of this) mathematical physicist O. Lanford. For his computer part to be applicable, he imposed additional restrictions on the maps considered. Still, generality is dear to mathematicians, though not to physicists, and the problem was considered solved. In a sense, it was solved indeed. Then another mathematician, D. Sullivan, who had recently moved from topology to dynamical systems, took up the challenge and proved the Feigenbaum conjecture without any assistance from computers. This is quite remarkable by itself; mathematicians usually abandon a problem, or even its whole area, after a computer-assisted argument. Even more remarkable is the fact that his proof is not only human-readable, but provides a better result: he lifted Lanford’s artificial restrictions.
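The kind of computer evidence that led Feigenbaum to his conjecture can be reproduced in a few lines. A minimal sketch (in Python; the function and parameter names are my own) iterates the standard logistic map x → r·x·(1−x), discards the transient, and reports the length of the attracting cycle, exhibiting the period-doubling cascade as r grows:

```python
def attractor(r: float, x0: float = 0.5,
              warmup: int = 5000, sample: int = 64) -> list:
    """Iterate the logistic map x -> r*x*(1-x) starting from x0,
    discard the transient, then collect the (rounded) values the
    orbit keeps visiting: the attracting cycle at parameter r."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

# The cycle length doubles as r increases: 1, 2, 4, 8, ...
for r in (2.8, 3.2, 3.5, 3.55):
    print(r, "->", len(attractor(r)), "point cycle")
```

Feigenbaum's observation was that the parameter values at which the period doubles accumulate geometrically, with ratios converging to a universal constant δ ≈ 4.669, independent of the particular map; it is this universality whose proof is discussed above.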

The next point (the one I promised above) concerns the standards Lanford applied to computer-assisted proofs. He said and wrote that a computer-assisted proof is a proof in any sense only if the author not only understands its human-readable part, but also understands every line of the computer code (and can explain why this code does what it is claimed to do). Moreover, the author should understand all details of the operating system used, up to the algorithm used to divide the time between different programs. For Lanford, a computer-assisted proof should be understandable to humans in every detail, except that it may take too much time to reproduce the computations themselves.

Obviously, Lanford understood quite well that mathematics is a human activity.

Compare this with what is going on now. People freely use the Windows OS (it seems that even at Microsoft nobody understands how and what it does), and proprietary software like Mathematica™, for which the code is a trade secret and reverse engineering is illegal. From my point of view, this fact alone puts everything done using such software outside not only of mathematics, but of any science.


Next post: To appear.