This post is an attempt to answer some questions of ACM in a form not requiring knowledge of Grothendieck's ideas or anything similar.

But it is self-contained and touches upon important and not widely known issues.

--------------------------------------------

It is not easy to explain how conceptual theorems and proofs, especially ones at a level close to that of Grothendieck's work, could be easier and more difficult at the same time. In fact, they are easy in one sense and difficult in another. Conceptual mathematics depends on – what would one expect here? – new concepts, or, what is the same, new definitions, in order to solve new problems. The hard part is to discover the appropriate definitions. After this, the proofs are very natural and straightforward, up to being completely trivial in many situations. They are easy. Classically, convoluted proofs with artificial tricks were valued most of all. Classically, it is desirable to have the most elementary proof possible, no matter how complicated it is.

A lot of effort was devoted to attempts to prove the theorem about the distribution of primes by elementary means. In this case the requirement was not to use the theory of complex functions. Finally, such a proof was found, and it turned out to be useless. Neither the first elementary proof nor the subsequent ones clarified anything, and none helped to prove a much more precise form of this theorem, known as the Riemann hypothesis (this is still an open problem, which many consider the most important problem in mathematics).

Let me try to do this using a simple example, which, perhaps, I have already mentioned (I am sure that I spoke about it quite recently, but it may not have been online). This example is not a “model” or a toy; it is real.

Probably, you know about the so-called Fundamental Theorem of Calculus, usually wrongly attributed to Newton and Leibniz (it was known earlier, and, for example, was presented in the lectures and a textbook of Newton's teacher, Isaac Barrow). It relates derivatives to integrals. Nothing useful can be done without it. Now, one can integrate not only functions of real numbers, but also functions of two variables (taking two real numbers as input), three, and so on. One can also differentiate functions of several variables (basically, by considering them only along straight lines and using the usual derivatives). A function of, say, 5 variables has 5 derivatives, called its *partial* derivatives.

Now, the natural question to ask is whether there is an analogue of the Fundamental Theorem of Calculus for functions of several variables. In the 19th century such analogues were needed for applications. Then 3 theorems of this sort were proved, namely, the theorems of Gauss-Ostrogradsky (they discovered it independently of each other, and I am not sure whether there was a third such mathematician or not), Green, and Stokes (some people, as far as I remember, attribute it to J.C. Maxwell, but it is called the Stokes theorem anyhow). The Gauss-Ostrogradsky theorem deals with integration over 3-dimensional domains in space, the Green theorem with 2-dimensional planar domains, and the Stokes theorem with integration over curved surfaces in the usual 3-dimensional space. I hope that I did not mix them up; the reason why this could happen is at the heart of the matter. Of course, I could check this in a moment; but then an important point would be less transparent.

Here are 3 theorems, clearly dealing with similar phenomena, but looking very different and having different and not quite obvious proofs. But there are useful functions of more than 3 variables. What about them? There is a gap in my knowledge of the history of mathematics: I don’t know any named theorem dealing with more variables, except the final one. Apparently, nobody has written even a moderately detailed history of the intermediate period between the 3 theorems above and the final version.
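For reference, the statements in question can be written out explicitly (these formulas are standard and are my addition, not part of the original post). The Fundamental Theorem of Calculus and its three classical multivariable analogues:

```latex
% Fundamental Theorem of Calculus:
\int_a^b f'(x)\, dx = f(b) - f(a)

% Gauss-Ostrogradsky (divergence) theorem, for a solid region V in R^3
% with boundary surface \partial V and a smooth vector field F:
\iiint_V (\nabla \cdot \mathbf{F})\, dV
  = \iint_{\partial V} \mathbf{F} \cdot \mathbf{n}\, dS

% Green's theorem, for a planar region D with boundary curve \partial D:
\iint_D \Bigl(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\Bigr)\, dx\, dy
  = \oint_{\partial D} (P\, dx + Q\, dy)

% Stokes (Kelvin-Stokes) theorem, for a curved surface S in R^3
% with boundary curve \partial S:
\iint_S (\nabla \times \mathbf{F}) \cdot \mathbf{n}\, dS
  = \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
```

In each case, the integral of some kind of derivative over a region equals an integral over the boundary of that region; this shared shape is exactly the "similar phenomena" referred to above.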

The final version is called the Stokes theorem again, even though Stokes had nothing to do with it (except that he proved that special case). It applies to functions of any number of variables and even to functions defined on so-called smooth manifolds, the higher-dimensional generalization of surfaces. On manifolds, variables can be introduced only locally, near any point; and manifolds themselves are not assumed to be contained in some nice ambient space like Euclidean space. So, the final version is much more general. And the final version has exactly the same form in all dimensions, and the above-mentioned 3 theorems are its immediate corollaries. This is why it is so easy to forget which names are associated with which particular case.
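In modern notation (my addition; the post itself does not spell the formula out), the final version is a single line: for a compact oriented smooth $n$-dimensional manifold $M$ with boundary $\partial M$, and a differential $(n-1)$-form $\omega$ on $M$,

```latex
\int_M d\omega = \int_{\partial M} \omega
```

Each classical theorem is the case of a particular $M$ and $\omega$: for instance, Green's theorem is recovered by taking $M = D \subset \mathbb{R}^2$ and $\omega = P\,dx + Q\,dy$.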

And – surprise! – the proof of the general Stokes theorem is trivial. There is a nice short (but very dense) book “Calculus on Manifolds” by M. Spivak devoted to this theorem. I recommend reading its preface to anybody interested in one way or another in mathematics. For mathematicians, knowing its content is a must. In the preface M. Spivak explains what happened. All the proofs are now trivial because all the difficulties were transferred into definitions. In fact, this Stokes theorem deals with integration not of functions, but of so-called differential forms, sometimes also called exterior forms. And this is a difficult notion. It required very deep insights to discover it, and it is still difficult to learn. In the simplest situation, where nothing depends on any variables, it was discovered by H. Grassmann in the middle of the 19th century. The discoveries of this German school teacher are so important that the American Mathematical Society published an English translation of one of his books a few years ago. It is still quite a mystery how he arrived at his definitions. With the benefit of hindsight, one may say that he was working on geometric problems, but was guided by abstract algebra (which did not exist till 1930). Later on his ideas were generalized in order to allow everything to depend on some variables (probably, E. Cartan was the main contributor here). By the 1930s the general Stokes theorem was well known to experts. Nowadays, it is possible to teach it to bright undergraduates in any decent US university, but there are not enough such bright undergraduates. It should be in some of the required courses for graduate students, but one can get a Ph.D. without ever being exposed to it.

To sum up, the modern Stokes theorem requires learning a new and not very well motivated (apparently, even Grassmann himself did not really understand why he introduced his exterior forms) notion of differential forms and their basic properties. Then you have a theorem from which all the 19th-century results follow immediately, and which is infinitely more general than all of them together. At the same time it has the same form for any number of variables and has a trivial proof (and the proofs of the needed theorems about differential forms are also trivial). There are no tricks in the proofs; they are very natural and straightforward. All difficulties were moved into definitions.

Now, what is easy and what is difficult? New definitions of such importance are infinitely rarer than new theorems. Most mathematicians, even of the highest caliber, did not discover any such definition. Only a minority of Abel prize winners discovered anything comparable, and it is still too early to judge if their definitions are really important. So, discovering new concepts is hard and rare. Then there is a common prejudice against anything new (I am amazed that it took more than 15 years to convince the public to buy HD TV sets, despite their being better in the most obvious sense), and there are real difficulties in learning these new notions. For example, there is the notion of a derived category (it comes from the Grothendieck school), which most mathematicians consider difficult and hardly relevant. All proofs in this theory are utterly trivial.
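For the record, the basic properties of the exterior derivative $d$ that make these proofs trivial can be stated in three lines (standard formulas, added here for reference; they are not in the original post):

```latex
% graded Leibniz rule for the wedge product:
d(\omega \wedge \eta) = d\omega \wedge \eta + (-1)^{\deg \omega}\, \omega \wedge d\eta

% applying d twice always gives zero:
d(d\omega) = 0

% d commutes with pullback along any smooth map f:
f^{*}(d\omega) = d(f^{*}\omega)
```

The identity $d \circ d = 0$ encodes, in one line, the equality of mixed partial derivatives; once these properties are established, the Stokes theorem itself reduces to a routine computation.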

Final note: the new conceptual proofs are often longer than the classical proofs, even of the same results. This is because in classical mathematics various tricks leading to shortcuts through an argument are highly valued, whereas in conceptual mathematics anything artificial is not valued at all.

Next post: The Hungarian Combinatorics from an Advanced Standpoint.

Dear sowa,

Let me start with some questions from the title of your post, related to my interpretation of some of your opinions that I read in other discussions. Is combinatorics indeed the antipode of conceptual mathematics? Are you suggesting that combinatorics cannot be managed in a conceptual way? I mean, are you suggesting that any attempt to approach the area of mathematics known as combinatorics from a conceptual point of view will fail because of the nature of combinatorics itself? What would be, from your point of view, the role of combinatorics as a mathematical area? Is combinatorics just a kind of "heuristic" that guides our thoughts and eventually leads to conceptual proofs?

My reply is again quite long. Maybe it is just because answers are always longer than questions. Or maybe I should put more effort into the writing.

So, please see "The Hungarian Combinatorics from an Advanced Standpoint".

This article of yours is really very inspiring. But I would like to ask one more question. Do you consider Galois theory an example of conceptual mathematics? What are some other examples of conceptual mathematics, in your opinion?

Dear Bourbakifan,

Yes, of course, I do. Galois theory is one of the best examples. It does not seem right to give examples, since this kind of mathematics does not consist of isolated fragments. All books by N. Bourbaki belong to conceptual mathematics, for example: commutative algebra, Lie groups and Lie algebras, harmonic analysis (the book "Spectral Theory", only two chapters of which were published). Most of algebraic topology (some parts degenerated into complicated computations). Algebraic geometry serves in these discussions as THE example. Of course, it has combinatorial fragments, but no noticeable Hungarian-combinatorial fragments so far.

The balance of the conceptual and the Hungarian-combinatorial in any given field of mathematics depends, to a large extent, on the tastes of the practitioners. Almost everything may be turned into Hungarian mathematics: one needs only to focus on non-conceptual questions. For example, for every existence theorem one may ask how to find the object whose existence is asserted. Most pure existence proofs can be transformed into constructive proofs (this is a theorem, but I do not remember the exact statement). Then you ask how long this takes, start to search for estimates, and so you open the gates to the Huns (Hungary is named after them – an amusing coincidence).