About the title

I changed the title of the blog on March 20, 2013 (it used to be “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting to the public the works of P. Deligne on the occasion of the award of the Abel prize to Deligne in 2013 (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would prefer that the prize to P. Deligne had been awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: to take over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Friday, August 23, 2013

The role of the problems

Previous post: Is algebraic geometry applied or pure mathematics?


From a comment by Tamas Gabal:

“I also agree that many 'applied' areas of mathematics do not have famous open problems, unlike 'pure' areas. In 'applied' areas it is more difficult to make bold conjectures, because the questions are often imprecise. They are trying to explain certain phenomena and most efforts are devoted to incremental improvements of algorithms, estimates, etc.”

The obsession of modern pure mathematicians with famous problems is not quite healthy. The proper role of such problems is to serve as a testing ground for new ideas, concepts, and theories. The reasons for this obsession appear to be purely social and geopolitical. The mathematical Olympiads turned into a sort of professional sport, where the winner increases the prestige of their country. Fields medals, Clay’s millions, and zillions of other prizes increase the social role of problem solving. The reason is obvious: a solution of a long-standing problem is clearly an achievement. In contrast, a new theory may prove its significance in ten years (and this will disqualify its author from the Fields medal), but may prove this only after 50 years or even more, like Grassmann’s theory. By the way, this is the main difficulty in evaluating J. Lurie's work.

Poincaré wrote that problems with a “yes/no” answer are not really interesting. The vague problems of the type of explaining certain phenomena are the most interesting ones and the most likely to lead to some genuinely new mathematics. In contrast with applied mathematics, incremental progress is rare in pure mathematics, and is not valued much. I am aware that many analysts will object (say, T. Tao in his initial incarnation as an expert in harmonic analysis), and may say that replacing 15/16 by 16/17 in some estimate (the fractions are invented by me on the spot) is huge progress comparable with solving one of the Clay problems. Still, I hold a different opinion. With these fractions the goal is certainly to get the constant 1, and no matter how close to 1 you get, you will still need a radically new idea to reach 1.

It is interesting to note that the mathematicians who selected the Clay problems were aware of the fact that a “yes/no” answer is not always the desired one. They included in the description of the prize a clause to the effect that a counterexample (a “no” answer) to a conjecture on the list does not automatically qualify for the prize. The conjectures are such that a “yes” answer always qualifies, but a “no” answer is interesting only if it really clarifies the situation.


Next post: Graduate level textbooks I.

Is algebraic geometry applied or pure mathematics?

Previous post: About some ways to work in mathematics.

From a comment by Tamas Gabal:

“This division into 'pure' and 'applied' mathematics is real, as it is understood and awkwardly enforced by the math departments in the US. How is algebraic geometry not 'applied' when so much of its development is motivated by theoretical physics?”

Of course, the division into pure and applied mathematics is real. They are two rather different types of human activity in every respect (including the role of the “problems”). Contrary to what you think, it is hardly reflected in the structure of US universities. Both pure and applied mathematics belong to the same department (with few exceptions). This allows university administrators to freely convert positions in pure mathematics into positions in applied mathematics. They never do the opposite conversion.

Algebraic geometry is not applied. You will not be able to fool any dean or provost with such a statement. I am surprised that this is, apparently, not obvious anymore. Here are some reasons.

1. First of all, the part of theoretical physics in which algebraic geometry is relevant is itself as pure as pure mathematics. It deals mostly with theories which cannot be tested experimentally: the required conditions existed only in the first 3 seconds after the Big Bang and, probably, only much earlier. The motivation for these theories is more or less purely esthetic, like in pure mathematics. Clearly, these theories are of no use in real life.

2. Being motivated by outside questions does not turn any branch of mathematics into an applied branch. Almost all branches of mathematics started from some questions outside of mathematics. To qualify as applied, a theory should be really applied to some outside problems. By the way, this is the main problem with what administrators call “applied mathematics”. While all “applied mathematicians” refer to applications as a motivation for their work, their results are nearly always useless. Moreover, usually they are predictably useless. In contrast, pure mathematicians cannot justify their research by applications, but their results eventually turn out to be very useful.

3. Algebraic geometry was developed as a part of pure mathematics with no outside motivation. What happens when it interacts with theoretical physics? The standard pattern over the last 30-40 years is the following. Physicists use their standard mode of reasoning to state, usually not precisely, some mathematical conjectures. The main tool of physicists not available to mathematicians is the Feynman integral. Then mathematicians prove these conjectures using already available tools from pure mathematics, and they do this surprisingly fast. Sometimes a proof is obtained before the conjecture is published. About 25 years ago I.M. Singer (of Atiyah-Singer theorem fame) wrote an outline of what, he hoped, would result from the interaction of mathematics with theoretical physics in the near future. In one phrase, one may say that he hoped for an infinite-dimensional geometry as nice and efficient as finite-dimensional geometry. This would be a sort of replacement for the Feynman integral. Well, his hopes did not materialize. The conjectures suggested by physicists are still being proved by finite-dimensional means; physics did not suggest any way even to make precise what kind of infinite-dimensional geometry is desired, and there is no interesting or useful genuinely infinite-dimensional geometry. By “genuinely” I mean “not being essentially/morally equivalent to a unified sequence of finite-dimensional theories or theorems”.

To sum up, nothing dramatic resulted from the interaction of algebraic geometry and theoretical physics. I do not mean that nothing good resulted. In mathematics this interaction resulted in some quite interesting theorems and theories. It did not change the landscape completely, as Grothendieck’s ideas did, but it made it richer. As for physics, the question is still open. More and more people are taking the position that these untestable theories are completely irrelevant to the real world (and hence are not physics at all). There are no applications, and hence the whole activity cannot be considered an applied one.


Next post: The role of the problems.

Wednesday, August 21, 2013

About some ways to work in mathematics

Previous post: New ideas.


From a comment by Tamas Gabal:

“...you mentioned that the problems are often solved by methods developed for completely different purposes. This can be interpreted in two different ways. First - if you work on some problem, you should constantly look for ideas that may seem unrelated to apply to your problem. Second - focus entirely on the development of your ideas and look for problems that may seem unrelated to apply your ideas. I personally lean toward the latter, but your advice may be different.”

Both ways to work are possible. There are also other ways: for example, not having any specific problem to solve. One should not suggest one way or another as the right one. You should work in the way which suits you best. Otherwise you are unlikely to succeed and you will miss most of the joy.

Actually, my statement did not suggest either of these approaches. Sometimes a problem is solved by discovering a connection between previously unrelated fields, and sometimes a problem is solved entirely within the context in which it was originally posed. You never know. And how does one constantly look for outside ideas? A useful idea may be hidden deep inside some theory and invisible otherwise. Nobody studies the whole of mathematics in the hope that this will help to solve a specific problem.

I think that it would be better not to think in terms of this alternative at all. You have a problem to solve, you work on it in all the ways you can (most approaches will fail – this is the unpleasant part of the profession), and that’s it. The advice would be to follow developments in a sufficiently big chunk of mathematics. Do not limit yourself to, say, algebra (if your field is algebra). The division of mathematics into geometry, algebra, and analysis is quite outdated. Then you may suddenly learn about some idea which will help you.

Also, you do not need to have a problem to begin with. Usually a mathematician starts with a precisely stated problem, suggested by the Ph.D. advisor. But even this is not necessary.

My own way of working is very close to the way M. Atiyah described his way of working in an interview published in “The Mathematical Intelligencer” in the early 1980s (of course, I do not claim that the achievements are comparable). This interview is highly recommended; it is also highly recommended by T. Gowers. I believe that I explained how I work to a friend (who asked a question similar to yours) before I read this interview. Anyhow, I described my way to him as follows. I do not work on any specific problem, except my own working conjectures. I am swimming in mathematics as in a sea or a river and look around for interesting things (the river of mathematics carries much more stuff than a real river). Technically this means that I follow various sources reporting on current developments, including talks, I read papers, both current and old ones, and I learn some things from textbooks. An advanced graduate-level textbook not in my area is my favorite kind of mathematics book. I am doing this because this is what I like to do, not because I want to solve a problem or need to publish 12 papers during the next 3 years. From time to time I see something to which, I feel, I can contribute. From time to time I see some connections which were not noticed before.

My work in “my area” started in the following way. I was familiar with a very new theory, which I learned from the only available (until about 2-3 years ago!) source: a French seminar devoted to its exposition. The author never wrote down any details. Then a famous mathematician visited us and gave a talk about a new (not yet published) remarkable theorem of another mathematician (it seems to me that it is good when people speak not only about their own work). The proof used at a key point an outside “Theorem A” by yet another mathematician. The speaker outlined its proof in a few phrases (most speakers would just quote Theorem A, so I was really lucky). Very soon I realized (maybe the same day or even during the talk) that the above new theory allows one to at least partially transplant Theorem A into a completely different context following the outline from the talk. But there was a problem: the conclusion of Theorem A tells you that you are either in a very nice generic situation, or in an exceptional situation. In my context there are obvious exceptions, but I had no idea whether there are non-obvious exceptions, or how to approach any exceptions. So, I did not even start to work on any details. 2-3 years later a preprint arrived in the mail. It was sent to me for reasons not related at all to the above story; actually, I did not tell anybody about these ideas. The preprint contained exactly what I needed: a proof that there are only the obvious exceptional cases (not mentioning Theorem A). Within a month I had a proof of an analogue of Theorem A (this proof was quickly replaced by a better one and I am not able to reproduce it). Naturally, I started to look around: what else can be done in my context? As it turned out, a lot. And the theory I learned from that French seminar is not needed for many of the interesting things.

Could all this be planned in advance following the advice of some experienced person? Certainly not. But if you do like this style, my advice would be: work this way. You will not be able to predict when you will discover something interesting, but you will discover something. If this style does not appeal to you, do not try it.

Note that this style is the opposite of Gowers’s. He starts with a problem. His belief that mathematics can be done by computers is based on a not quite explicit assumption that his way is the only way, and he keeps a place for humans in his not-very-science-fiction future, at least at the beginning: humans are needed as the source of problems for computers. I don’t see any motivation for humans to supply computers with mathematical problems, but, apparently, Gowers does. More importantly, a part of mathematics which admits solutions of its problems by computers will very soon die out. Since the proofs will be produced and verified by computers, humans will have no source of inspiration (which is the proofs).


Next post: Is algebraic geometry applied or pure mathematics?

Tuesday, August 20, 2013

New ideas

Previous post: Did J. Lurie solve any big problem?


Tamas Gabal asked:

“Dear Sowa, in your own experience, how often genuinely new ideas appear in an active field of mathematics and how long are the periods in between when people digest and build theories around those ideas? What are the dynamics of progress in mathematics, and how various areas are different in this regard?”

Here is my partial reply.


This question requires a book-length answer, especially because it is not very precisely formulated. I will try to be shorter. :- )

First of all, what should be considered as genuinely new ideas? How new and original are they required to be? Even for such a fundamental notion as the integral there are different choices. At one end, there is only one new idea related to it, which predates the discovery of mathematics itself. Namely, it is the idea of area. If we lower our requirements a little, there will be 3 other ideas, associated with the works of Archimedes, Lebesgue, and the by now hardly known works of Denjoy, Perron, and others. The Riemann integral is just a modern version of the ideas of Archimedes and other Ancient Greek mathematicians. The Denjoy integral generalizes the Lebesgue one and has some desirable properties which the Lebesgue integral does not. But it turned out to be a dead end without any applications to topics of general interest. I will stop my survey of the theory of integration here: there are many other contributions. The point is that if we lower our requirements further, then we have many more “genuinely new” ideas.

It would be much better to speak not about some vague levels of originality, but about areas of mathematics. Some ideas are new and important inside the theory of integration, but are of almost no interest to outsiders.

You asked about my personal experience. Are you asking about what my general knowledge tells me, or about what happened in my own mathematical life? Even if you are asking about the latter, it is very hard to answer. At the highest level I contributed no new ideas. One may even say that nobody after Grothendieck did (although I personally believe that 2 or 3 other mathematicians did), so I am not ashamed. I am not inclined to classify my work as analysis, algebra, geometry, topology, etc. Formally, I am assigned to one of these boxes; but this only hurts me and my research. Still, there is a fairly narrow subfield of mathematics to which I contributed, probably, 2 or 3 ideas. According to A. Weil, if a mathematician has contributed 1 new idea, he is really exceptional; most mathematicians do not contribute any new ideas. If a mathematician contributed 2 or 3 new ideas, he or she would be a great mathematician, according to A. Weil. For this reason, I wrote “2 or 3” not without great hesitation. I do not overestimate myself. I wanted to illustrate what happens if the area is sufficiently narrow, but not necessarily narrowed to the limit. The area I am talking about can be very naturally partitioned further. I worked in other fields too, and I hope that those papers also contain a couple of new ideas. For sure, they are of a level lower than the one A. Weil had in mind.

On the one hand, this personal example shows another extreme way to count the frequency of new ideas. I don’t think that it would be interesting to lower the level further. Many papers and even small lemmas contain some little new ideas (still, many more do not). On the other hand, this is important on a personal level. Mathematics is a very difficult profession, and it has lost almost all its appeal as a career due to the changes in the universities (at least in the West, especially in the US). It is better to know in advance what kind of internal reward you may get out of it.

As for the timeframe, I think that a new idea is usually understood and used within a year (one has to keep in mind that mathematics is a very slow art) by a few followers of the discoverer, often by his or her students or personal friends. Here “a few” means something like 2-5 mathematicians. The mathematical community needs about 10 years to digest something new, and sometimes it needs much more time. It seems that all this is independent of the level of the contribution. The less fundamental ideas are of interest to fewer people, so they are digested more slowly, despite being easier.

I don’t have much to say about the dynamics (what is the dynamics here?) of progress in mathematics. The past is discussed in many books about the history of mathematics, although I don’t know any that I could recommend without reservations. The only exception is the historical notes at the ends of N. Bourbaki’s books (they were translated into English and published as a separate book by Springer). A good starting point for reading about the 20th century is the article by M. Atiyah, “Mathematics in the 20th century”, American Mathematical Monthly, August/September 2001, p. 654 – 666. I will not try to predict the future. If you predict it correctly, nobody will believe you; if not, there is no point. Mathematicians usually try to shape the future by posing problems, but this usually fails even if the problem is solved, because it is solved by tools developed for other purposes. And the future of mathematics is determined by tools. A solution of a really difficult problem often kills an area of research, at least temporarily (for decades at a minimum).

My predictions for pure mathematics are rather bleak, but they are based on observing the basic trends in society, and not on the internal situation in mathematics. There is an internal problem in mathematics pointed out by C. Smoryński in the 1980s. The very fast development of mathematics in the preceding decades created many large gaps in the mathematical literature. Some theories lack readable expositions; some theorems are universally accepted but appear to have big gaps in their proofs. C. Smoryński predicted that mathematicians would turn to expository work and clear up this mess. He also predicted more attention to the history of mathematics. A lot of ideas are hard to understand without knowing why and how they were developed. His predictions have not materialized yet. Expository work is often more difficult than so-called “original research”, but it is hardly rewarded.


Next post: About some ways to work in mathematics.

Sunday, August 4, 2013

Did J. Lurie solve any big problem?

Previous post: Guessing who will get Fields medals - Some history and 2014.

Tamas Gabal asked the following question.

I heard a criticism of Lurie's work, that it does not contain startling new ideas, complete solutions of important problems, even new conjectures. That he is simply rewriting old ideas in a new language. I am very far from this area, and I find it a little disturbing that only the ultimate experts speak highly of his work. Even people in related areas can not usually give specific examples of his greatness. I understand that his objectives may be much more long-term, but I would still like to hear some response to these criticisms.

Short answer: I don't care. Here is a long answer.

Well, this is the reason why my opinion about Lurie is somewhat conditional. As I already said, if an impartial committee confirms the significance of Lurie’s work, it will remove my doubts and, very likely, will stimulate me to study his work in depth. It is much harder to predict what the influence of the actual committee will be. Perhaps I will try to learn his work in any case. If he does not get the medal, then in the hope of making sure that the committee is wrong.

I planned to discuss many peculiarities of mathematical prizes in another post, but one of these peculiarities ought to be mentioned now. Most mathematical prizes go to people who solved some “important problems”. In fact, most of them go to people who made the last step in solving a problem. There is a recent and famous example at hand: the Clay $1,000,000.00 prize was awarded to Perelman alone. But the method was designed by R. Hamilton, who did a huge amount of work but wasn’t able to make “the last step”. Perhaps just because of age. As Perelman said to a Russian news agency, he declined the prize because in his opinion Hamilton’s work is no less important than his own, and Hamilton deserves the prize no less than he does. It seems that this reason is still not known widely enough. To the best of my knowledge, it was not included in any press release of the Clay Institute. The Clay Institute scheduled the award ceremony as if they knew nothing, and then held the ceremony as planned. Except that Grisha Perelman wasn’t present, and he did not accept the prize in any sense.

So, the prizes go to mathematicians who made the last step in the solution of a recognized problem. The mathematicians building the theories on which these solutions are based almost never get Fields medals. Their chances are better when the prize is a prize for lifetime contribution (as is the case with the Abel prize). There are a few exceptions.

First of all, A. Grothendieck is an exception. He proved part of the Weil conjectures, but not the most important one (later proved by P. Deligne). One of the Weil conjectures (the basic one) was independently proved by B. Dwork, by a completely different method, and published earlier (by the way, this is a fairly accessible and extremely beautiful piece of work). The report of J. Dieudonné at the 1966 Congress outlines a huge theory, to a large extent still not written down at the time. It includes some theorems, like the Grothendieck-Riemann-Roch theorem, but: (i) the GRR theorem does not solve any established problem, it is a radically new type of statement; (ii) Grothendieck did not publish his proof, being of the opinion that the proof was not good enough (an exposition was published by Borel and Serre); (iii) it is just a byproduct of his new way of thinking.

D. Quillen (Fields medal 1978) did solve some problems, but his main achievement is the solution of a very unusual problem: to give a good definition of the so-called higher algebraic K-functors. It is a theory. Moreover, there are other solutions. Eventually, it turned out that they all provide equivalent definitions. But Quillen’s definitions (actually, he suggested two) are much better than the others.

So, I do not care much whether Lurie solved some “important problems” or not. Moreover, in the current situation I would rather prefer that he has not solved any well-known problems, if he is to get a Fields medal. The contrast with Hungarian combinatorics, which is concentrated on statements and problems, will make mathematics healthier.

Problems are very misleading. Often they achieve their status not because they are really important, but because a prize was associated with them (Fermat’s Last Theorem), or because they were posed by a famous mathematician. An example of the latter situation is nothing else but the Poincaré Conjecture – in fact, Poincaré did not state it as a conjecture, he just mentioned that “it would be interesting to know the answer to the following question”. It is not particularly important by itself. It claims that one difficult-to-verify property (being homeomorphic to a 3-sphere) is equivalent to another difficult-to-verify property (having trivial fundamental group). In practice, if you know that the fundamental group is trivial, you also know that your manifold is a 3-sphere.
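
In the standard formulation: for a closed 3-manifold $M$,

\[
\pi_1(M) = 1 \iff M \cong S^3 ,
\]

and only the implication from left to right is nontrivial.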

Next post: New ideas.

Sunday, July 28, 2013

2014 Fields medalists?

Previous post: New comments to the post "What is mathematics?"

I was asked by Tamas Gabal about possible 2014 Fields medalists listed in an online poll. I am neither ready to systematically write down my thoughts about the prizes in general and Fields medals in particular, nor to predict who will get 2014 medals. I am sure that the world would be better without any prizes, especially without Fields medals. Also, in my opinion, no more than two persons deserve 2014 Fields medals. Instead of trying to argue these points, I will quote my reply to Tamas Gabal (slightly edited).

If I knew who the members of the Fields medal committee are, I would be able to predict the medalists with 99% confidence. But the composition of the committee is a secret. In the past, the situation was rather different. The composition of the committee wasn't important. When I was just a second-year graduate student, I compiled a list of 10 candidates, among whom I considered 5 to have significantly higher chances (I never wrote down this partition, and the original list is lost for all practical purposes). All 4 winners were on the list. I was especially proud of predicting one of them; he was a fairly nontraditional choice at the time (or so I thought). I cannot do anything like this now without knowing the composition of the committee. Recent choices appear to be more or less random, with some obvious exceptions (like Grisha Perelman).

Somewhat later I wrote:

In the meantime I looked at the current results of that poll. It looks like the preferences of the public are determined by the same mechanism as the preferences for movie actors and actresses: name recognition.

Tamas Gabal replied:

Sowa, when you were a graduate student and made that list of possible winners, did you not rely on name recognition at least partially? Were you familiar with their work? That would be pretty impressive for a graduate student, since T. Gowers basically admitted that he was not really familiar with the work of Fields medalists in 2010, while he was a member of the committee. I wonder if anyone can honestly compare the depth of the work of all these candidates? The committee will seek an opinion of senior people in each area (again, based on name recognition, positions, etc.) and will be influenced by whoever makes the best case... It's not an easy job for sure.

Here is my reply.

Good question. In order to put a name on a list, one has to know this name, i.e. recognize it. But I knew many more than 10 names. Actually, this is one of the topics I wanted to write about sometime in detail. The whole atmosphere at that time was completely different from what I see around me now. Maybe the place also played some role, but I doubt that its role was decisive. Most of the people around me liked to talk about mathematics, and not only about what they were doing. When some guy in Japan claimed that he proved the Riemann hypothesis, I knew about it the same week. Note that the internet was still in the future, as was e-mail. I had a feeling that I knew about everything important going on in mathematics. I always had a little bit more curiosity than others, so I also knew about fields fairly remote from my own work.

I do not remember all 10 names on my list (I remember about 7), but 4 winners were included. It was quite easy to guess 3 of them. Everybody would agree that they were the main contenders. I am really proud of guessing the 4th one. Nobody around was talking about him or even mentioned him, and his field is quite far from my own interests. To what extent did I understand their work? I studied some work of one winner, knew the statements and had some idea about the proofs for another one (later the work of both of them influenced my own work a lot, but mostly indirectly), and knew very well what the achievements of the third one are, why they are important, etc. I knew more or less just the statements of the two main results of the 4th one, the one who was difficult to guess – for me. I was able to explain why this or that guy got the medal even to a theoretical physicist (and actually did so on one occasion). But I wasn’t able to teach a topics course about the work of any of the 4.

At the time I never heard any complaints that a medal went to a wrong person. The same goes for all the older awards. There was always a consensus in the mathematical community that all the people who got the medal deserved it. Maybe somebody else deserved it too, but there are only 3 or 4 medals each time.

Mathematics is a human activity. This is one of the facts that T. Gowers prefers to ignore. Nobody verifies proofs line by line. Initially, you trust your gut feelings. If you need to use a theorem, you will be forced to study the proof and understand its main ideas. The same is true of the depth of a result. You do not need to know all the proofs in order to write down a list like my list of the 10 most likely winners (the next time, my list consisted of no more than 5 or 6, and all the winners were included). It seems that I knew the work of all the guessed winners better than Gowers knew the work of the 2010 medalists. But even if not, there is a huge difference between a graduate student trying to guess the current year’s winners, and a Fellow of the Royal Society of London, a Fields medalist himself, who is deciding who will get the 2010 medals. He should know more.

The job is surely not an easy one now, when it is all about politics. Otherwise it would be very pleasant.

Next post: Guessing who will get Fields medals - Some history and 2014.

Monday, March 25, 2013

Reply to JSE

Previous post: Reply to Timothy Gowers


Here is a reply to a comment by JSE.

I just checked the first version of the Green-Tao paper in the arXiv (the file is on my computer). The Introduction presents the paper as a paper proving a long-standing conjecture about prime numbers. Erdős’s conjecture on arithmetic progressions is not even mentioned. Of course, most non-experts read only the introduction.
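
For context, Erdős’s conjecture on arithmetic progressions (in its standard formulation) states: if $A \subseteq \mathbb{N}$ and

\[
\sum_{a \in A} \frac{1}{a} = \infty ,
\]

then $A$ contains arithmetic progressions of every finite length. Since the sum of the reciprocals of the primes diverges, the Green-Tao theorem would be the special case where $A$ is the set of primes.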

Your impression could (and, actually, should) be different from that of a layman mathematician: you are an expert in the field. And my claim wasn’t that nobody realized that the Green-Tao paper is not a work about primes at all. I claim only that this is far from being obvious, and that a lot of mathematicians thought it is a work about primes. Primes have a special (and well-deserved) status in mathematics; everything new about primes seems to be much more valuable than some result about a class of subsets of the set of natural numbers.

Then you make a quite interesting claim, even using ALL CAPS. I must admit that I also do not care now about the existence of arbitrarily long arithmetic progressions of primes, after the spell of Szemerédi's theorem and Gowers’s work on it disappeared. We differ in that you are interested in arithmetic progressions in other sets, despite not being interested in the set of primes. I am not. Even more definitely, I am not interested in arithmetic progressions in other groups. Pretending for a moment that I still believe in the theory of “Two cultures”, I see such questions as an easy way to turn some conceptual notions (the notions of primes and groups in this case) into a playground for the “second culture” mathematicians, and an opportunity for them to mingle with the ones working in the “first culture”. Another standard way to do this is to ask about “best estimates” or simply about the existence of any estimate for an existence theorem. (Most “pure existence” proofs can be transformed into proofs with estimates according to a result of the logician G. Kreisel – a result known so little that I am going to have quite a difficult time looking for a reference.)

Let us step back to the source of all these questions about arithmetic progressions: the theorem of van der Waerden. I never thought that its statement is important or interesting. But I found the proof interesting (in agreement with the maxim that proofs are more important than theorems). It was the most complicated and powerful use of (iterated) mathematical induction that I had seen at the time I learned it. I still think that this aspect of the proof is interesting. Of course, the real questions are concerned not with the usual induction but with transfinite induction. To the best of my knowledge, Martin’s proof of the determinacy of Borel games still holds the place of the purely mathematical theorem (in contrast with advanced set theory) requiring the most complicated form of transfinite induction. Apparently, it is also the only mathematical result which needs the axiom of replacement for its proof (namely, F of ZF, Fraenkel’s axiom of the Zermelo-Fraenkel set theory). This is hardly a mainstream topic nowadays (for either of the “cultures”), but for me it is really deep and interesting.
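
For completeness, the standard statement of van der Waerden’s theorem:

\[
\forall\, k, r \;\exists\, N = W(k, r):\ \text{every partition of } \{1, \dots, N\} \text{ into } r \text{ classes has a class containing an arithmetic progression of length } k .
\]

The classical proof proceeds by a double induction involving both $k$ and $r$, which is the iterated induction referred to above.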


Next post: Hard, soft, and Bott periodicity - Reply to T. Gowers.

Sunday, March 24, 2013

Reply to Timothy Gowers

Previous post: Happy New Year!


Here is a reply to a comment by T. Gowers about my post My affair with Szemerédi-Gowers mathematics.

I agree that we have no way to know what will happen to combinatorics or any other branch of mathematics. From my point of view, your “intermediate possibility” (namely, developing some artificial way of conceptualization) does not qualify as a way to make it “conceptual” (actually, a proper conceptualization cannot be artificial essentially by definition) and is not an attractive perspective at all. By the way, the use of algebraic geometry as a reference point in this discussion is purely accidental. A lot of other branches of mathematics are conceptual, and in every branch there are more conceptual and less conceptual subbranches. As is well known, even Deligne’s completion of the proof of the Weil conjectures was not conceptual enough for Grothendieck.

Let me clarify how I understand the term “conceptual”. A theory is conceptual if most of the difficulties have been moved from proofs to definitions (i.e. to concepts), or they are there from the very beginning (which may happen only inside an already conceptual theory). The definitions may be difficult to digest at the first encounter, but the proofs are straightforward. A very good and elementary example is provided by the modern form of the Stokes theorem. In the 19th century we had the fundamental theorem of calculus and 3 theorems, respectively due to Gauss-Ostrogradsky, Green, and Stokes, dealing with more complicated integrals. Now we have only one theorem, usually called the Stokes theorem, valid for all dimensions. After all the definitions are put in place, its proof is trivial. M. Spivak nicely explains this in the preface to his classic “Calculus on Manifolds”. (I would like to note in parentheses that if the algebraic concepts were chosen more carefully than in his book, then the whole theory would be noticeably simpler and the definitions would be easier to digest. Unfortunately, such approaches have not yet found their way into the textbooks.) So, in this case the conceptualization leads to trivial proofs and much more general results. Moreover, it opens the way to further developments: the de Rham cohomology turns into the most natural next thing to study.
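
To be concrete: for a compact oriented smooth $n$-manifold $M$ with boundary $\partial M$ and a smooth $(n-1)$-form $\omega$ on $M$, the modern statement is the single formula

\[
\int_M d\omega = \int_{\partial M} \omega .
\]

The fundamental theorem of calculus is the case $n = 1$; the classical theorems of Green, Gauss-Ostrogradsky, and Stokes are the cases $n = 2$ and $n = 3$. All the difficulties are hidden in the definitions: manifolds, orientations, differential forms, and the exterior derivative $d$.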

I think that for every branch of mathematics and every theory such a conceptualization eventually turns into a necessity: without it the subject grows into a huge body of interrelated and cross-referenced results and eventually falls apart into many, to a large extent isolated, problems. I even suspect that your desire to have a sort of at least semi-intelligent version of MathSciNet (if I remember correctly, you wrote about this in your GAFA 2000 paper) was largely motivated by the difficulty of working in such a field.

This naturally leads us to one more scenario (the 3rd one, if we lump together your “intermediate” scenario with the failure to develop a conceptual framework) for a non-conceptualized theory: it will die slowly. This happens from time to time: a lot of branches of analysis which flourished at the beginning of the 20th century are forgotten by now. There is even a recent example involving a quintessentially conceptual part of mathematics and the first Abel prize winner, J.-P. Serre. As H. Weyl stressed in his address to the 1954 Congress, the Fields medal was awarded to Serre for his spectacular work (his thesis) on spectral sequences and their applications to homotopy groups, especially to the homotopy groups of spheres (the problem of computing these groups was at the center of attention of leading topologists for about 15 years without any serious success). Serre did not push his method to its limits; he had already started to move first to complex manifolds, then to algebraic geometry, and eventually to algebraic number theory. Others did, and this quickly resulted in a highly chaotic collection of computations with the Leray-Serre spectral sequences plus some elementary considerations. Apart from the main properties of these spectral sequences (which can be used without any real understanding of spectral sequences), the theory lacked any conceptual framework. Serre lost interest even in the results, not just in the proofs. This theory is long dead. The surviving part is based on further conceptual developments: the Adams spectral sequence, then the Adams-Novikov spectral sequence. This line of development is alive and well to this day.

Another example of a dead theory is Euclidean geometry.

In view of all this, it seems that there are only the following options for a mathematical theory or a branch of mathematics: to continuously develop proper conceptualizations, or to die and have its results relegated to books for gifted students (undergraduate students in the modern US, high school students in some other places and times).


Next post: Reply to JSE.