About the title
I changed the title of the blog on March 20, 2013 (it used to be "Notes of an owl"). This was my immediate reaction to the news that T. Gowers was presenting to the public the works of P. Deligne on the occasion of the Abel prize awarded to Deligne in 2013 (by his own admission, T. Gowers is not qualified to do this). The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation the mathematical community could receive for the 2012 award of the Abel prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would have preferred the prize to P. Deligne to be awarded out of pure appreciation of his work.
I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: to take over the arXiv-overlay electronic journals. The same arguments apply.
Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.
Sunday, March 31, 2013
This is a partial reply to a comment by vznvzn in Gowers's blog.
Combinatorics is most resolutely not "a new way of looking at mathematics". It is very old, known for hundreds of years; perhaps it was already known in ancient Babylon.
And Erdős is not a "contrarian". His work belongs to the most widely practiced tradition in analysis. As a crude approximation, one can say that this tradition originates in the calculus of Leibniz, which is quite different from the calculus of Newton. Even most mathematicians are not aware of the difference between the Leibniz calculus and the Newton calculus. This is not surprising at all, since only the Leibniz calculus is taught nowadays.
It is Grothendieck's way of looking at mathematics, the one which I advocate, that is new. This new, conceptual way of doing mathematics immediately met strong resistance.
And in some cases its opponents won. For example, the early work of Grothendieck in functional analysis had no influence until analysts managed to translate part of his ideas into their standard language. It seems that only quite recently some analysts realized that a lot was lost in this translation, and produced a better one, closer to the spirit of Grothendieck's original work.
Another example is provided by the invasion of this new style, and even of some technical concepts developed in this style, into the analysis of several complex variables. This was intolerable for the classical complex analysts, and they started to stress problems about which it was more or less clear that they could be approached by familiar methods. They succeeded, and already in the 1970s a prominent representative of the classical school, W. Rudin, could proudly say that Grothendieck's methods (he was more specific) had disappeared into the background. He did not publish his opinion at the time, but attempted to insult a prominent representative of the new style, A. Borel, with such statements. A quarter of a century (or more) later he told this story in an autobiographical book. (W. Rudin is a good mathematician and the author of several exceptionally good books, but A. Borel was a brilliant mathematician.)
Now we are observing a much broader attempt, apparently led by T. Gowers, to eliminate the conceptual way of doing mathematics completely. At the very least, T. Gowers is the face of this movement for the mathematical public. After this, T. Gowers envisions an elimination of mathematics itself by relegating it to computers. It looks like the second step is the one most dear to his heart (see the discussion in his blog about a year ago). It seems that combinatorics is much more amenable to computerization (although I don't believe that even this is possible) than conceptual mathematics.
Actually, it is not hard to believe that computers can efficiently produce proofs of a wide class of theorems (the proofs will be unreadable to humans, but some will still consider them proofs). But for conceptual mathematics it is the definitions, not the proofs, that are important. Conceptual mathematics looks for new definitions interesting to humans. Proofs and theorems serve as a stimulus for work and as a necessary testing ground for new definitions. If a new definition does not help to prove new theorems or to simplify the proofs of old ones, it is not interesting to humans.
There is only one way to get rid of conceptual mathematics, namely, the Wigner shift of the second kind. The new generation should be told that combinatorics is new, that it is the field to work in, and very soon the only young people we will see will be the ones doing combinatorics. Since mathematics is to a huge extent "a young people's game", such a shift can be accomplished very quickly.
P.S. It is worth noting that there are two branches of combinatorics, and one of them already belongs to conceptual mathematics. Some people (like D. Zeilberger) intentionally ignore this in order to promote the non-conceptual kind.
Next post: D. Zeilberger's Opinions 1 and 62.
Wednesday, March 27, 2013
This is partially a reply to a comment by Emmanuel Kowalski.
There is a phenomenon which I can hardly explain. For example, E. Kowalski said in the linked comment that he cannot comment on my statements (it seems that he is not addressing me at all, he is just commenting) without making assumptions about me, i.e. without using ad hominem arguments. Why can he not write about my ideas without knowing my personal details?
It seems that E. Kowalski suspects that my opinions are somehow deducible from my personal life circumstances, my biography, etc.
In fact, it is possible that I have more experience due to my biography than most other mathematicians. This is indeed partially the case, but only partially, and it does not affect my opinions about mathematical theories. These aspects of my life experience are quite obvious already in the discussion in Gowers's blog.
But my opponents do not seem to adhere to this theory, which obviously favors me. Rather, it seems that they believe I am not knowledgeable enough, or plain stupid. Were this the case, my conclusions would most likely be wrong and, moreover, it would be quite easy to refute them without making any assumptions about me.
In fact, one of the main reasons for my semi-anonymity is that I would like to see my arguments and opinions evaluated on their intrinsic merits, without anyone knowing whether I am married or not, how good or bad my employer is - name anything you would like to know.
This phenomenon is not limited to my opponents. Somebody, apparently sympathetic to me, wrote: "I'd be very interested in any small mathematical insight you might be willing to share, if you're whom I conjecture you are". So, even my mathematical insights are interesting or not depending on who I am. For me, the interest of a mathematical (or "meta-mathematical", like this discussion) insight does not depend on whom it belongs to.
Of course, sometimes the authorship matters. But assumptions about the author still do not. Let us imagine that today is 1976 (many other years would work too). Then any person interested in algebra, algebraic topology, or Grothendieck algebraic geometry knows that all papers by D. Quillen to date are very interesting and often contain incredibly deep insights. It is only natural to be interested in any new paper by Quillen. I don't know anybody working now who is comparable in this respect to the Quillen of 1976; this is the reason for an exercise in time travel.
At the same time, if I see an interesting result, theory, or insight, it does not matter for me whether it is published in the Annals or in the Amer. Math. Monthly, who the author is, and what problems in life she or he has, if any.
In both situations the insights of a person lead to her or his reputation. The reputation itself does not make all insights of this person interesting. Only in rare cases does a reputation suggest that it is worthwhile to pay attention to the works of a person.
Unfortunately, this seems to be no longer true, at least in the West. The relatively recent cult of the Fields medals makes the work and the area of any new winner instantly interesting. In the past, the presenters of the awarded medals used to stress that there are at least 30-40 young mathematicians with comparable achievements. Not anymore. In the US, one will be monetarily rewarded for a trivial paper in the Annals, but never for an expository paper (and no books, please, I was told many years ago), no matter how deep its insights. Papers in a European journal are treated by default as second-rate papers. An insight of a person working in the Ivy League is more valuable than a much deeper insight of a person working in Alabama. And so on.
Finally, I would like to make an offer to Emmanuel Kowalski (only to him).
Dear Emmanuel Kowalski,
You may ask me in the comments here anything you would like to know. I do not promise to answer all questions. I will evaluate to what extent my answers would help to work out my real-life identity, and will not answer the questions which are really helpful in this respect. In particular, I will not tell what my area of research is. I will also not answer questions which I deem too personal. But if finding out my identity is not your goal, here is your chance to replace your assumptions with actual knowledge.
Next post: Combinatorics is not a new way of looking at mathematics.
Monday, March 25, 2013
This is a reply to a comment of T. Gowers.
Yesterday I left the remarks about "hard" arguments and Bott periodicity without any comments. Here are a few.
First, the meaning of the word "hard" varies from person to person. There is a fairly precise definition in analysis, due to Hardy. Still, I fail to use it to classify, say, Lars Ahlfors's work: is it hard or soft? I was told once that it is not "hard analysis", but was not given any meaningful explanation why. For me, Ahlfors is the greatest analyst of the last century (let us assume that the 20th century started around 1910, at least, in order to avoid hardly relevant comparisons).
Notice that the terminology is already fairly misleading: the opposite of "hard" is not "easy" (even though hard analysis is assumed to be difficult and hard to work in). It is "soft". What dichotomy is understood here? Clearly, finding the right definitions is not an easy thing; more often than not it is very difficult. The defined objects may turn out to be either "hard" or "soft" depending on what we wanted. For definitions I see only one meaningful interpretation: objects are hard if they are rigid (like Platonic solids), and soft if they can be easily deformed (and the space of deformations is high-dimensional) without losing their key characteristics. It seems that "hard" theories are very often the ones dealing with "soft" objects.
But I suspect that the people using the hard-soft terminology will disagree with me. At the same time, in conceptual mathematics there is no such dichotomy at all, and so it is impossible to acquire any experience in using it.
The example of the Bott periodicity theorem is a good testing ground. The situation with Bott periodicity is more or less the opposite of what you wrote about it. First of all, it is not a black box to be used without opening it.
The first proof was based on Morse theory for the infinite-dimensional space of curves in some classical manifolds. (A nice idea of Morse reduces the problem at once to the finite-dimensional situation.) This result is crucial for topological K-theory; it is built into its structure. But I never saw any detailed exposition in which the original theorem of Bott was used as a black box for developing topological K-theory. The theorem seems to be too weak for this: it provides only for a one-point space the result needed for a wide class of topological spaces. Probably, the machinery of algebraic topology allows one to deduce the general result from the one for a one-point space, but this definitely cannot be done without reworking Bott's theorem in order to get a more explicit result first. Atiyah in his book uses another proof (due to him and Bott), which has the advantage of being "elementary" and giving the result for all reasonable spaces without any intermediaries, and the disadvantage of being the most obscure one. Later on, K-theory (and, hence, Bott periodicity) was used to prove the Atiyah-Singer index theorem. The index theorem has an advanced version, the index theorem for families (of elliptic operators).
In fact, the really useful theorem is the index theorem for families, not the original index theorem. After the second proof of the index theorem was found, which imitated Grothendieck's proof of the Grothendieck-Riemann-Roch theorem and made it possible to prove the index theorem for families (the first proof does not), Atiyah used it to give a new proof of Bott periodicity. One may suspect a circular argument here, but there is none. A carefully excised fragment of this proof does not depend on Bott periodicity, and, combined with an algebraic idea due to Atiyah, it led to a new proof of Bott periodicity. This proof turned out to be the most important one. The subject of analytic K-theory is to a big extent an attempt to use the ideas of this proof in as general a situation as possible, and to apply the results. The main point here is that people working in analytic K-theory are not using Bott periodicity as a black box; they are thinking about what is really inside this box. By now we have at least 8 substantially different proofs of Bott periodicity, and progress on questions related to Bott periodicity usually requires rethinking the theorem and its proof, not using it as a black box. Perhaps because of all this, people prefer to speak about the phenomenon of Bott periodicity and not about Bott's theorem.
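(For the record, and in my wording rather than Bott's: the standard formulations of the periodicity being discussed are the following.)

```latex
% Bott periodicity, homotopy-theoretic form: for the stable unitary group U
% and the stable orthogonal group O,
\[
  \pi_k(U) \cong \pi_{k+2}(U), \qquad \pi_k(O) \cong \pi_{k+8}(O) \qquad (k \geq 0).
\]
% The form built into the structure of topological K-theory: external
% multiplication by the Bott class \beta \in \widetilde{K}(S^2) is an
% isomorphism
\[
  \beta \cdot \colon \widetilde{K}(X) \xrightarrow{\ \cong\ } \widetilde{K}(S^2 \wedge X)
\]
% for every compact space X. The homotopy-theoretic form is the special case
% where X is a sphere, i.e. where the one-point space is "stabilized".
```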
Next post: To appear.
Here is a reply to a comment by JSE.
I just checked the first version of the Green-Tao paper in the arXiv (the file is on my computer). The introduction presents the paper as one proving a long-standing conjecture about prime numbers. The Erdős conjecture on arithmetic progressions is not even mentioned. Of course, most non-experts read only the introduction.
Your impression could (and, actually, should) be different from that of a layman mathematician: you are an expert in the field. And my claim wasn't that nobody realized that the Green-Tao paper is not a work about primes at all. I claim only that this is far from being obvious, and that a lot of mathematicians thought that it is a work about primes. Primes have a special (and well-deserved) status in mathematics, and everything new about primes seems to be much more valuable than some result about a class of subsets of the set of natural numbers.
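(To spell out, in my own formulation, the two statements being contrasted here:)

```latex
% Szemerédi's theorem: if a set A \subseteq \mathbb{N} has positive upper
% density, i.e.
\[
  \limsup_{N \to \infty} \frac{|A \cap \{1, \dots, N\}|}{N} > 0,
\]
% then A contains arithmetic progressions of every finite length.
%
% The Green-Tao theorem: the set of primes contains arbitrarily long
% arithmetic progressions. Since the primes have density zero, Szemerédi's
% theorem does not apply to them directly; this is why the Green-Tao paper
% is, at bottom, about a class of subsets of \mathbb{N} (those supporting a
% suitable pseudorandom measure), with the primes as the showcase example.
```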
Then you make a quite interesting claim, even using ALL CAPS. I must admit that I also do not care now about the existence of arbitrarily long arithmetic progressions of primes, after the spell of Szemerédi's theorem and Gowers's work about it disappeared. We differ in that you are interested in arithmetic progressions in other sets, despite not being interested in the set of primes. I am not. I am even more definitely not interested in arithmetic progressions in other groups. Pretending for a moment that I still believe in the theory of "Two cultures", I see such questions as an easy way to turn some conceptual notions (the notions of primes and groups in this case) into a playground for the "second culture" mathematicians, and an opportunity for them to mingle with the ones working in the "first culture". Another standard way to do this is to ask about "best estimates", or simply about the existence of any estimate for an existence theorem. (It is little known that most "pure existence" proofs can be transformed into proofs with estimates, according to a little-known result of the logician G. Kreisel - so little known that I am going to have quite a difficult time looking for a reference.)
Let us step back to the source of all these questions about arithmetic progressions, the theorem of van der Waerden. I never thought that its statement is important or interesting. But I found the proof interesting (in agreement with the maxim that proofs are more important than theorems). It was the most complicated and powerful use of (iterated) mathematical induction that I had seen at the time I learned it. I still think that this aspect of the proof is interesting. Of course, the real questions are concerned not with the usual induction but with transfinite induction. To the best of my knowledge, Martin's proof of the determinacy of Borel games still holds the place of the purely mathematical theorem (in contrast with advanced set theory) requiring the most complicated form of transfinite induction. Apparently, it is also the only mathematical result which needs the axiom of replacement for its proof (namely, the F of ZF, Fraenkel's axiom of the Zermelo-Fraenkel set theory). This is hardly a mainstream topic nowadays (for either of the "cultures"), but for me it is really deep and interesting.
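(For completeness, the statement whose proof I am praising, in my wording:)

```latex
% Van der Waerden's theorem: for all r, k \geq 1 there exists N = N(r, k)
% such that every coloring of \{1, \dots, N\} with r colors leaves some color
% class containing an arithmetic progression of length k:
\[
  \forall\, c \colon \{1, \dots, N\} \to \{1, \dots, r\}
  \;\; \exists\, a, d \geq 1 \text{ with } a + (k-1)d \leq N \colon\;
  c(a) = c(a+d) = \dots = c(a + (k-1)d).
\]
% The classical proof is a double induction on (k, r): the statement for
% progressions of length k+1 is deduced from the statement for length k
% applied with a much larger number of colors - the "iterated" induction
% referred to above.
```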
Next post: Hard, soft, and Bott periodicity - Reply to T. Gowers.
Sunday, March 24, 2013
Here is a reply to a comment by T. Gowers about my post My affair with Szemerédi-Gowers mathematics.
I agree that we have no way to know what will happen with combinatorics or any other branch of mathematics. From my point of view, your "intermediate possibility" (namely, developing some artificial way of conceptualization) does not qualify as a way to make it "conceptual" (actually, a proper conceptualization cannot be artificial, essentially by definition) and is not an attractive perspective at all. By the way, the use of algebraic geometry as a reference point in this discussion is purely accidental. A lot of other branches of mathematics are conceptual, and in every branch there are more conceptual and less conceptual subbranches. As is well known, even Deligne's completion of the proof of the Weil conjectures was not conceptual enough for Grothendieck.
Let me clarify how I understand the term "conceptual". A theory is conceptual if most of the difficulties have been moved from the proofs to the definitions (i.e. to the concepts), or are there from the very beginning (which may happen only inside an already conceptual theory). The definitions may be difficult to digest at the first encounter, but the proofs are straightforward. A very good and elementary example is provided by the modern form of the Stokes theorem. In the 19th century we had the fundamental theorem of calculus and 3 theorems, due respectively to Gauss-Ostrogradsky, Green, and Stokes, dealing with more complicated integrals. Now we have only one theorem, usually called the Stokes theorem, valid in all dimensions. After all the definitions are put in place, its proof is trivial. M. Spivak nicely explains this in the preface to his classic "Calculus on Manifolds". (I would like to note in parentheses that if the algebraic concepts were chosen more carefully than in his book, the whole theory would be noticeably simpler and the definitions would be easier to digest. Unfortunately, such approaches have not found their way into the textbooks yet.) So, in this case the conceptualization leads to trivial proofs and much more general results. Moreover, it opens the way to further developments: the de Rham cohomology turns into the most natural next thing to study.
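(To make the example concrete - these are the standard statements, not quotations from Spivak:)

```latex
% The modern Stokes theorem: for a compact oriented smooth n-manifold M with
% boundary \partial M and a smooth (n-1)-form \omega on M,
\[
  \int_M d\omega = \int_{\partial M} \omega.
\]
% All the difficulty sits in the definitions (manifold with boundary,
% orientation, differential form, exterior derivative d, integration of
% forms); once these are in place the proof is a short computation. The
% 19th-century theorems are the special cases: the fundamental theorem of
% calculus (M = [a, b]), Green's theorem (M a plane region), the classical
% Stokes theorem (M a surface in \mathbb{R}^3), and the Gauss-Ostrogradsky
% divergence theorem (M a solid region in \mathbb{R}^3).
```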
I think that for every branch of mathematics and every theory such a conceptualization eventually turns into a necessity: without it the subject grows into a huge body of interrelated and cross-referenced results and eventually falls apart into many largely isolated problems. I even suspect that your desire to have a sort of at least semi-intelligent version of MathSciNet (if I remember correctly, you wrote about this in your GAFA 2000 paper) was largely motivated by the difficulty of working in such a field.
This naturally leads us to one more scenario (the 3rd one, if we lump together your "intermediate" scenario with the failure to develop a conceptual framework) for a theory which is not conceptualized: it will die slowly. This happens from time to time: a lot of branches of analysis which flourished at the beginning of the 20th century are forgotten by now. There is even a recent example involving a quintessentially conceptual part of mathematics and the first Abel prize winner, J.-P. Serre. As H. Weyl stressed in his address to the 1954 Congress, the Fields medal was awarded to Serre for his spectacular work (his thesis) on spectral sequences and their applications to homotopy groups, especially the homotopy groups of spheres (the problem of computing these groups had been at the center of attention of leading topologists for about 15 years without any serious successes). Serre did not push his method to its limits; he had already started to move on, first to complex manifolds, then to algebraic geometry, and eventually to algebraic number theory. Others did, and this quickly resulted in a highly chaotic collection of computations with the Leray-Serre spectral sequences plus some elementary considerations. Assuming the main properties of these spectral sequences (which can be used without any real understanding of spectral sequences), the theory lacked any conceptual framework. Serre lost interest even in the results, not just the proofs. This theory is long dead. The surviving part is based on further conceptual developments: the Adams spectral sequence, then the Adams-Novikov spectral sequence. This line of development is alive and well to this day.
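(For the non-topologist reader, a standard formulation of the "main properties" alluded to above, in my wording: for a fibration F → E → B with, say, simply connected base B, the Leray-Serre spectral sequence reads)

```latex
\[
  E_2^{p,q} \cong H^p\!\left(B;\, H^q(F)\right)
  \;\Longrightarrow\; H^{p+q}(E).
\]
% Its E_2 page is determined by the base and the fiber alone, and it
% converges to the cohomology of the total space E - which is exactly why
% it could be used as a computational black box, without any real
% understanding of where the spectral sequence comes from.
```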
Another example of a dead theory is Euclidean geometry.
In view of all this, it seems that there are only the following options for a mathematical theory or a branch of mathematics: to continuously develop proper conceptualizations, or to die and have its results relegated to books for gifted students (undergraduate students in the modern US, high school students in some other places and times).
Next post: Reply to JSE.