About the title

I changed the title of the blog on March 20, 2013 (it used to have the title “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting the work of P. Deligne to the public on the occasion of the award of the Abel prize to Deligne in 2013 (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would have preferred the prize to P. Deligne to be awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: to take over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Friday, April 5, 2013

Conceptual mathematics vs. the classical (combinatorial) one.

Previous post: Simons's video protection, youtube.com, etc.

This post is an attempt to answer some questions of ACM in a form not requiring knowledge of Grothendieck's ideas or anything similar.

But it is self-contained and touches upon important and not widely known issues.

--------------------------------------------


It is not easy to explain how conceptual theorems and proofs, especially ones at a level close to that of Grothendieck's work, can be easier and more difficult at the same time. In fact, they are easy in one sense and difficult in another. Conceptual mathematics depends on – what would one expect here? – new concepts, or, what is the same, new definitions, in order to solve new problems. The hard part is to discover the appropriate definitions. After that the proofs are very natural and straightforward, up to being completely trivial in many situations. They are easy. Classically, convoluted proofs with artificial tricks were valued most of all. Classically, it is desirable to have the most elementary proof possible, no matter how complicated it is.

A lot of effort was devoted to attempts to prove the theorem about the distribution of primes by elementary means. In this case the requirement was not to use the theory of complex functions. Finally, such a proof was found, and it turned out to be useless. Neither the first elementary proof nor the subsequent ones clarified anything, and none of them helped to prove a much more precise form of this theorem, known as the Riemann hypothesis (this is still an open problem, which many consider the most important problem in mathematics).

Let me try to do this using a simple example, which, perhaps, I have already mentioned (I am sure that I spoke about it quite recently, but it may have been not online). This example is not a “model” or a toy; it is real.

Probably, you know about the so-called Fundamental Theorem of Calculus, usually wrongly attributed to Newton and Leibniz (it was known earlier, and, for example, was presented in the lectures and a textbook of Newton's teacher, Isaac Barrow). It relates derivatives to integrals. Nothing useful can be done without it. Now, one can integrate not only functions of one real number, but also functions of two variables (having two real numbers as the input), three, and so on. One can also differentiate functions of several variables (basically, by considering them only along straight lines and using the usual derivatives). A function of, say, 5 variables has 5 derivatives, called its partial derivatives.
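In the usual notation, the one-variable statement is simply

\[
\int_a^b f'(x)\,dx \;=\; f(b) - f(a),
\]

and a partial derivative of a function of several variables is the usual derivative taken with all the other variables frozen, for example

\[
\frac{\partial f}{\partial x_1}(x_1,\dots,x_5) \;=\; \lim_{h\to 0}\frac{f(x_1+h,\,x_2,\dots,x_5) - f(x_1,\dots,x_5)}{h}.
\]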

Now, the natural question to ask is whether there is an analogue of the Fundamental Theorem of Calculus for functions of several variables. In the 19th century such analogues were needed for applications. Then 3 theorems of this sort were proved, namely, the theorems of Gauss-Ostrogradsky (they discovered it independently of each other, and I am not sure if there was a third such mathematician or not), Green, and Stokes (some people, as far as I remember, attribute it to J.C. Maxwell, but it is called the Stokes theorem anyhow). The Gauss-Ostrogradsky theorem deals with integration over 3-dimensional domains in space, the Green theorem with 2-dimensional planar domains, and the Stokes theorem with integration over curved surfaces in the usual 3-dimensional space. I hope that I did not mix them up; the reason why this could happen is at the heart of the matter. Of course, I could check this in a moment; but then an important point would be less transparent.
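In the usual vector-calculus notation these three theorems look as follows (here D is a planar domain, V a solid body, S a surface in space, each with its boundary):

\[
\text{Green:}\quad \oint_{\partial D} \bigl(P\,dx + Q\,dy\bigr) \;=\; \iint_D \Bigl(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\Bigr)\,dx\,dy,
\]
\[
\text{Gauss-Ostrogradsky:}\quad \iint_{\partial V} \mathbf{F}\cdot\mathbf{n}\;dS \;=\; \iiint_V \operatorname{div}\mathbf{F}\;dV,
\]
\[
\text{Stokes:}\quad \oint_{\partial S} \mathbf{F}\cdot d\mathbf{r} \;=\; \iint_S \bigl(\operatorname{curl}\mathbf{F}\bigr)\cdot\mathbf{n}\;dS.
\]

Already in this notation one can feel that these are three faces of one phenomenon; but the notation itself hides what that phenomenon is.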

Here are 3 theorems, clearly dealing with similar phenomena, but looking very different and having different and not quite obvious proofs. But there are useful functions of more than 3 variables. What about them? There is a gap in my knowledge of the history of mathematics: I don't know of any named theorem dealing with more variables, except the final one. Apparently, nobody has written even a moderately detailed history of the intermediate period between the 3 theorems above and the final version.

The final version is called the Stokes theorem again, despite the fact that Stokes had nothing to do with it (except that he proved that special case). It applies to functions of any number of variables and even to functions defined on so-called smooth manifolds, the higher-dimensional generalizations of surfaces. On manifolds, variables can be introduced only locally, near any point; and manifolds themselves are not assumed to be contained in some nice ambient space like a Euclidean space. So, the final version is much more general. And the final version has exactly the same form in all dimensions, and the above-mentioned 3 theorems are its immediate corollaries. This is why it is so easy to forget which names are associated with which particular case.
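In the language of differential forms (introduced below), the whole theorem is one line: for a compact oriented smooth n-dimensional manifold M with boundary ∂M and a smooth (n−1)-form ω on M,

\[
\int_M d\omega \;=\; \int_{\partial M} \omega.
\]

The three 19th century theorems are this one line read for n = 2 and n = 3.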

And – surprise! – the proof of the general Stokes theorem is trivial. There is a nice short (but very dense) book “Calculus on Manifolds” by M. Spivak devoted to this theorem. I recommend reading its preface to anybody interested in one way or another in mathematics. For mathematicians, knowing its content is a must. In the preface M. Spivak explains what happened. All the proofs are now trivial because all the difficulties were transferred into definitions. In fact, this Stokes theorem deals with integration not of functions, but of the so-called differential forms, sometimes also called exterior forms. And this is a difficult notion. It required very deep insight to discover it, and it is still difficult to learn. In the simplest situation, where nothing depends on any variables, it was discovered by H. Grassmann in the middle of the 19th century. The discoveries of this German school teacher are so important that the American Mathematical Society published an English translation of one of his books a few years ago. It is still quite a mystery how he arrived at his definitions. With the benefit of hindsight, one may say that he was working on geometric problems, but was guided by abstract algebra (which did not exist until about 1930). Later on his ideas were generalized in order to allow everything to depend on some variables (probably, E. Cartan was the main contributor here). In the 1930s the general Stokes theorem was well known to experts. Nowadays, it is possible to teach it to bright undergraduates in any decent US university, but there are not enough such bright undergraduates. It should be in one of the required courses for graduate students, but one can get a Ph.D. without ever being exposed to it.

To sum up, the modern Stokes theorem requires learning the new and not very well motivated (apparently, even Grassmann himself did not really understand why he introduced his exterior forms) notion of differential forms and their basic properties. Then you have a theorem from which all the 19th century results follow immediately, and which is infinitely more general than all of them together. At the same time it has the same form for any number of variables and has a trivial proof (and the proofs of the needed theorems about differential forms are also trivial). There are no tricks in the proofs; they are very natural and straightforward. All the difficulties were moved into the definitions.
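For example, Green's theorem is the general Stokes theorem applied to the 1-form ω = P dx + Q dy on a planar domain D: its exterior derivative is

\[
d\omega \;=\; \Bigl(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\Bigr)\,dx\wedge dy,
\]

so the equality \(\int_D d\omega = \int_{\partial D}\omega\) is literally Green's theorem; the Gauss-Ostrogradsky and the classical Stokes theorems are obtained in exactly the same way from suitable forms in 3-dimensional space.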

Now, what is easy here and what is difficult? New definitions of such importance are infinitely rarer than new theorems. Most mathematicians, even of the highest caliber, did not discover any such definition. Only a minority of Abel prize winners discovered anything comparable, and it is still too early to judge if their definitions are really important. So, discovering new concepts is hard and rare. Then there is a common prejudice against anything new (I am amazed that it took more than 15 years to convince the public to buy HD TV sets, despite the fact that they are better in the most obvious sense), and there are real difficulties in learning these new notions. For example, there is the notion of a derived category (it comes from the Grothendieck school), which most mathematicians consider difficult and hardly relevant. All proofs in this theory are utterly trivial.

Final note: the new conceptual proofs are often longer than the classical proofs of even the same results. This is because in classical mathematics various tricks leading to shortcuts through an argument are highly valued, while in conceptual mathematics anything artificial is not valued at all.



Next post: The Hungarian Combinatorics from an Advanced Standpoint.