About the title

I changed the title of the blog on March 20, 2013 (it used to have the title “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting to the public the work of P. Deligne on the occasion of the 2013 Abel Prize being awarded to Deligne (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel Prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would have preferred the prize to P. Deligne to be awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers’s influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first of them; now there is another: to take over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Friday, August 23, 2013

The role of the problems

Previous post: Is algebraic geometry applied or pure mathematics?


From a comment by Tamas Gabal:

“I also agree that many 'applied' areas of mathematics do not have famous open problems, unlike 'pure' areas. In 'applied' areas it is more difficult to make bold conjectures, because the questions are often imprecise. They are trying to explain certain phenomena and most efforts are devoted to incremental improvements of algorithms, estimates, etc.”

The obsession of modern pure mathematicians with famous problems is not quite healthy. The proper role of such problems is to serve as a testing ground for new ideas, concepts, and theories. The reasons for this obsession appear to be purely social and geopolitical. The mathematical Olympiads have turned into a sort of professional sport, where the winner increases the prestige of their country. Fields medals, Clay’s millions, and zillions of other prizes increase the social role of problem solving. The reason is obvious: a solution of a long-standing problem is clearly an achievement. In contrast, a new theory may prove its significance in ten years (and this will disqualify its author for the Fields medal), or may prove it only after 50 years or even more, like Grassmann’s theory. By the way, this is the main difficulty in evaluating J. Lurie’s work.

Poincaré wrote that problems with a “yes/no” answer are not really interesting. The vague problems of the type “explain certain phenomena” are the most interesting ones and the most likely to lead to some genuinely new mathematics. In contrast with applied mathematics, incremental progress is rare in pure mathematics and is not valued much. I am aware that many analysts will object (say, T. Tao in his initial incarnation as an expert in harmonic analysis), and may say that replacing 15/16 by 16/17 in some estimate (the fractions are invented by me on the spot) is a huge progress comparable with solving one of the Clay problems. Still, I hold a different opinion. With such fractions the goal is certainly to get the constant 1, and no matter how close to 1 you get, you will still need a radically new idea to get 1.

It is interesting to note that the mathematicians who selected the Clay problems were aware of the fact that a “yes/no” answer is not always the desired one. They included in the description of the prize a clause to the effect that a counterexample (a “no” answer) to a conjecture included in the list does not automatically qualify for the prize. The conjectures are such that a “yes” answer always qualifies, but a “no” answer is interesting only if it really clarifies the situation.


Next post: Graduate level textbooks I.

Is algebraic geometry applied or pure mathematics?

Previous post: About some ways to work in mathematics.

From a comment by Tamas Gabal:

“This division into 'pure' and 'applied' mathematics is real, as it is understood and awkwardly enforced by the math departments in the US. How is algebraic geometry not 'applied' when so much of its development is motivated by theoretical physics?”

Of course, the division into pure and applied mathematics is real. They are two rather different types of human activity in every respect (including the role of “problems”). Contrary to what you think, it is hardly reflected in the structure of US universities. Both pure and applied mathematics belong to the same department (with few exceptions). This allows university administrators to freely convert positions in pure mathematics into positions in applied mathematics. They never do the opposite conversion.

Algebraic geometry is not applied. You will not be able to fool any dean or provost with such a statement. I am surprised that this is, apparently, not obvious anymore. Here are some reasons.

1. First of all, the part of theoretical physics in which algebraic geometry is relevant is itself as pure as pure mathematics. It deals mostly with theories which cannot be tested experimentally: the required conditions existed only in the first 3 seconds after the Big Bang and, probably, only much earlier. The motivation for these theories is more or less purely aesthetic, as in pure mathematics. Clearly, these theories are of no use in real life.

2. Being motivated by outside questions does not turn a branch of mathematics into an applied one. Almost all branches of mathematics started from some questions outside of mathematics. To qualify as applied, a theory should be really applied to some outside problems. By the way, this is the main problem with what administrators call “applied mathematics”. While all “applied mathematicians” refer to applications as the motivation of their work, their results are nearly always useless. Moreover, they are usually predictably useless. In contrast, pure mathematicians cannot justify their research by applications, but their results eventually turn out to be very useful.

3. Algebraic geometry was developed as a part of pure mathematics with no outside motivation. What happens when it interacts with theoretical physics? The standard pattern over the last 30-40 years is the following. Physicists use their standard mode of reasoning to state, usually not precisely, some mathematical conjectures. The main tool of physicists not available to mathematicians is the Feynman integral. Then mathematicians prove these conjectures using already available tools from pure mathematics, and they do this surprisingly fast. Sometimes a proof is obtained before the conjecture is published. About 25 years ago I.M. Singer (of Atiyah-Singer theorem fame) wrote an outline of what, he hoped, would result from the interaction of mathematics with theoretical physics in the near future. In one phrase, one may say that he hoped for an infinite-dimensional geometry as nice and efficient as finite-dimensional geometry. This would be a sort of replacement for the Feynman integral. Well, his hopes did not materialize. The conjectures suggested by physicists are still being proved by finite-dimensional means; physics did not suggest any way even to make precise what kind of infinite-dimensional geometry is desired, and there is no interesting or useful genuinely infinite-dimensional geometry. By “genuinely” I mean “not essentially/morally equivalent to a unified sequence of finite-dimensional theories or theorems”.

To sum up, nothing dramatic resulted from the interaction of algebraic geometry and theoretical physics. I do not mean that nothing good resulted. In mathematics this interaction resulted in some quite interesting theorems and theories. It did not change the landscape completely, as Grothendieck’s ideas did, but it made it richer. As for physics, the question is still open. More and more people are taking the position that these untestable theories are completely irrelevant to the real world (and hence are not physics at all). There are no applications, and hence the whole activity cannot be considered an applied one.


Next post: The role of the problems.

Wednesday, August 21, 2013

About some ways to work in mathematics

Previous post: New ideas.


From a comment by Tamas Gabal:

“...you mentioned that the problems are often solved by methods developed for completely different purposes. This can be interpreted in two different ways. First - if you work on some problem, you should constantly look for ideas that may seem unrelated to apply to your problem. Second - focus entirely on the development of your ideas and look for problems that may seem unrelated to apply your ideas. I personally lean toward the latter, but your advice may be different.”

Both ways to work are possible. There are also other ways: for example, not having any specific problem to solve. One should not suggest one way or another as the right one. You should work in the way which suits you best. Otherwise you are unlikely to succeed, and you will miss most of the joy.

Actually, my statement did not suggest either of these approaches. Sometimes a problem is solved by discovering a connection between previously unrelated fields, and sometimes a problem is solved entirely within the context in which it was posed originally. You never know. And how does one constantly look for outside ideas? A useful idea may be hidden deep inside some theory and be invisible otherwise. Nobody studies the whole of mathematics in the hope that this will help to solve a specific problem.

I think that it would be better not to think in terms of this alternative at all. You have a problem to solve, you work on it in all the ways you can (most approaches will fail – this is the unpleasant part of the profession), and that’s it. My advice would be to follow developments in a sufficiently big chunk of mathematics. Do not limit yourself to, say, algebra (if your field is algebra). The division of mathematics into geometry, algebra, and analysis is quite outdated. Then you may suddenly learn about some idea which will help you.

Also, you do not need to have a problem to begin with. Usually a mathematician starts with a precisely stated problem, suggested by the Ph.D. advisor. But even this is not necessary.

My own way of working is very close to the way M. Atiyah described his way of working in an interview published in “The Mathematical Intelligencer” in the early 1980s (of course, I do not claim that the achievements are comparable). This interview is highly recommended; it is also highly recommended by T. Gowers. I believe that I explained how I work to a friend (who asked a question similar to yours) before I read this interview. Anyhow, I described my way of working to him as follows. I do not work on any specific problem, except for my own working conjectures. I am swimming in mathematics as in a sea or a river, looking around for interesting things (the river of mathematics carries much more stuff than a real river). Technically this means that I follow various sources of information about current developments, including talks; I read papers, both current and old ones; and I learn some things from textbooks. An advanced graduate-level textbook not in my area is my favorite type of book in mathematics. I am doing this because this is what I like to do, not because I want to solve a problem or need to publish 12 papers during the next 3 years. From time to time I see something to which, I feel, I can contribute. From time to time I see some connections which were not noticed before.

My work in “my area” started in the following way. I was familiar with a very new theory, which I learned from the only available (until about 2-3 years ago!) source: a French seminar devoted to its exposition. The author never wrote down any details. Then a famous mathematician visited us and gave a talk about a new (not yet published) remarkable theorem of another mathematician (it seems to me that it is good when people speak not only about their own work). The proof used at a key point an outside “Theorem A” by still another mathematician. The speaker outlined its proof in a few phrases (most speakers would just quote Theorem A, so I was really lucky). Very soon I realized (maybe the same day or even during the talk) that the above new theory allows one to at least partially transplant Theorem A into a completely different context, following the outline from the talk. But there was a problem: the conclusion of Theorem A tells you that you are either in a very nice generic situation, or in an exceptional situation. In my context there are obvious exceptions, but I had no idea whether there are non-obvious exceptions, or how to approach any exceptions. So, I did not even start to work on the details. 2-3 years later a preprint arrived in the mail. It was sent to me for reasons not related at all to the above story; actually, I had not told anybody about these ideas. The preprint contained exactly what I needed: a proof that there are only the obvious exceptional cases (without mentioning Theorem A). Within a month I had a proof of an analogue of Theorem A (this proof was quickly replaced by a better one, and I am not able to reproduce it). Naturally, I started to look around: what else can be done in my context? As it turned out, a lot. And the theory I learned from that French seminar is not needed for many of the interesting things.

Could all this have been planned in advance, following the advice of some experienced person? Certainly not. But if you do like this style, my advice would be: work this way. You will not be able to predict when you will discover something interesting, but you will discover it. If this style does not appeal to you, do not try it.

Note that this style is the opposite of Gowers’s. He starts with a problem. His belief that mathematics can be done by computers is based on a not quite explicit assumption that his way is the only way, and he keeps a place for humans in his not-very-science-fiction scenario, at least at the beginning: humans are needed as the source of problems for computers. I don’t see any motivation for humans to supply computers with mathematical problems, but, apparently, Gowers does. More importantly, a part of mathematics which admits solutions of its problems by computers will very soon die out. Since the proofs will be produced and verified by computers, humans will have no source of inspiration (namely, the proofs).


Next post: Is algebraic geometry applied or pure mathematics?

Tuesday, August 20, 2013

New ideas

Previous post: Did J. Lurie solve any big problem?


Tamas Gabal asked:

“Dear Sowa, in your own experience, how often genuinely new ideas appear in an active field of mathematics and how long are the periods in between when people digest and build theories around those ideas? What are the dynamics of progress in mathematics, and how various areas are different in this regard?”

Here is my partial reply.


This question requires a book-length answer, especially because it is not very precisely formulated. I will try to be brief. :-)

First of all, what should be considered as genuinely new ideas? How new and original are they required to be? Even for such a fundamental notion as the integral there are different choices. At one end, there is only one new idea related to it, which predates the discovery of mathematics itself: namely, the idea of area. If we lower our requirements a little, there will be 3 other ideas, associated with the works of Archimedes, Lebesgue, and the by now hardly known works of Denjoy, Perron, and others. The Riemann integral is just a modern version of the ideas of Archimedes and other Ancient Greek mathematicians. The Denjoy integral generalizes the Lebesgue one and has some desirable properties which the Lebesgue integral lacks. But it turned out to be a dead end without any applications to topics of general interest. I will stop my survey of the theory of integration here: there are many other contributions. The point is that if we lower our requirements further, then we have many more “genuinely new” ideas.
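A standard illustration of the gap between two of these levels is the Dirichlet function on $[0,1]$,

$$f(x) = \begin{cases} 1, & x \in \mathbb{Q}, \\ 0, & x \notin \mathbb{Q}, \end{cases}$$

which is Lebesgue integrable with $\int_0^1 f\,dx = 0$ (the rationals have measure zero), but is not Riemann integrable: by choosing the sample points, any Riemann sum can be made equal to either 0 or 1.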

It would be much better to speak not about some vague levels of originality, but about areas of mathematics. Some ideas are new and important inside the theory of integration, but of almost no interest to outsiders.

You asked about my personal experience. Are you asking about what my general knowledge tells me, or about what happened in my own mathematical life? Even if you are asking about the latter, it is very hard to answer. At the highest level I contributed no new ideas. One may even say that nobody after Grothendieck did (although I personally believe that 2 or 3 other mathematicians did), so I am not ashamed. I am not inclined to classify my work as analysis, algebra, geometry, topology, etc. Formally, I am assigned to one of these boxes; but this only hurts me and my research. Still, there is a fairly narrow subfield of mathematics to which I contributed, probably, 2 or 3 ideas. According to A. Weil, if a mathematician has contributed 1 new idea, he is really exceptional; most mathematicians do not contribute any new ideas. If a mathematician has contributed 2 or 3 new ideas, he or she would be a great mathematician, according to A. Weil. For this reason, I wrote “2 or 3” not without great hesitation. I do not overestimate myself. I wanted to illustrate what happens if the area is sufficiently narrow, but not necessarily narrowed to the limit. The area I am talking about can be very naturally partitioned further. I worked in other fields too, and I hope that those papers also contain a couple of new ideas. For sure, they are of a lower level than the one A. Weil had in mind.

On the one hand, this personal example shows another extreme way to count the frequency of new ideas. I don’t think that it would be interesting to lower the level further. Many papers and even small lemmas contain some little new ideas (still, many more do not). On the other hand, this is important on a personal level. Mathematics is a very difficult profession, and it has lost almost all its appeal as a career due to the changes in the universities (at least in the West, especially in the US). It is better to know in advance what kind of internal reward you may get out of it.

As for the timeframe, I think that a new idea is usually understood and used within a year (one has to keep in mind that mathematics is a very slow art) by a few followers of the discoverer, often his or her students or personal friends. Here “a few” is something like 2-5 mathematicians. The mathematical community needs about 10 years to digest something new, and sometimes it needs much more time. It seems that all this is independent of the level of the contribution. Less fundamental ideas are of interest to fewer people, so they are digested more slowly, despite being easier.

I don’t have much to say about the dynamics (what is the dynamics here?) of progress in mathematics. The past is discussed in many books about the history of mathematics, although I don’t know any which I could recommend without reservations. The only exception is the historical notes at the ends of N. Bourbaki’s books (they were translated into English and published as a separate book by Springer). A good starting point for reading about the 20th century is the article by M. Atiyah, “Mathematics in the 20th century”, American Mathematical Monthly, August/September 2001, pp. 654-666. I will not try to predict the future. If you predict it correctly, nobody will believe you; if not, there is no point. Mathematicians usually try to shape the future by posing problems, but this usually fails even if the problem is solved, because it is solved by tools developed for other purposes. And the future of mathematics is determined by tools. A solution of a really difficult problem often kills an area of research, at least temporarily (for decades at minimum).

My predictions for pure mathematics are rather bleak, but they are based on observing the basic trends in society, and not on the internal situation in mathematics. There is an internal problem in mathematics pointed out by C. Smoryński in the 1980s. The very fast development of mathematics in the preceding decades created many large gaps in the mathematical literature. Some theories lack readable expositions; some theorems are universally accepted but appear to have big gaps in their proofs. C. Smoryński predicted that mathematicians would turn to expository work and clear up this mess. He also predicted more attention to the history of mathematics: a lot of ideas are hard to understand without knowing why and how they were developed. His predictions have not materialized yet. Expository work is often more difficult than so-called “original research”, but it is hardly rewarded.


Next post: About some ways to work in mathematics.

Sunday, August 4, 2013

Did J. Lurie solve any big problem?

Previous post: Guessing who will get Fields medals - Some history and 2014.

Tamas Gabal asked the following question.

“I heard a criticism of Lurie’s work, that it does not contain startling new ideas, complete solutions of important problems, even new conjectures. That he is simply rewriting old ideas in a new language. I am very far from this area, and I find it a little disturbing that only the ultimate experts speak highly of his work. Even people in related areas cannot usually give specific examples of his greatness. I understand that his objectives may be much more long-term, but I would still like to hear some response to these criticisms.”

Short answer: I don't care. Here is a long answer.

Well, this is the reason why my opinion about Lurie is somewhat conditional. As I already said, if an impartial committee confirms the significance of Lurie’s work, it will remove my doubts and, very likely, will stimulate me to study his work in depth. It is much harder to predict what the influence of the actual committee will be. Perhaps I will try to learn his work in any case. If he does not get the medal, then in the hope of making sure that the committee is wrong.

I planned to discuss many peculiarities of mathematical prizes in another post, but one of these peculiarities ought to be mentioned now. Most mathematical prizes go to people who solved some “important problems”. In fact, most of them go to people who made the last step in solving a problem. There is a recent and famous example at hand: the Clay $1,000,000.00 prize was awarded to Perelman alone. But the method was designed by R. Hamilton, who did a huge amount of work but wasn’t able to make “the last step”. Perhaps just because of age. As Perelman said to a Russian news agency, he declined the prize because in his opinion Hamilton’s work is no less important than his own, and Hamilton deserves the prize no less than he does. It seems that this reason is still not known widely enough. To the best of my knowledge, it was not included in any press release of the Clay Institute. The Clay Institute scheduled the award ceremony as if they knew nothing, and then held the ceremony as planned. Except that Grisha Perelman wasn’t present, and he did not accept the prize in any sense.

So, the prizes go to mathematicians who made the last step in the solution of a recognized problem. The mathematicians building the theories on which these solutions are based almost never get Fields medals. Their chances are better when the prize is awarded for a lifetime contribution (as is the case with the Abel prize). There are a few exceptions.

First of all, A. Grothendieck is an exception. He proved part of the Weil conjectures, but not the most important one (later proved by P. Deligne). One of the Weil conjectures (the basic one) was independently proved by B. Dwork, by a completely different and independent method, and published earlier (by the way, this is a fairly accessible and extremely beautiful piece of work). The report of J. Dieudonné at the 1966 Congress outlines a huge theory, to a large extent still not written down at the time. It includes some theorems, like the Grothendieck-Riemann-Roch theorem, but: (i) the GRR theorem does not solve any established problem, it is a radically new type of statement; (ii) Grothendieck did not publish his proof, being of the opinion that the proof was not good enough (an exposition was published by Borel and Serre); (iii) it is just a byproduct of his new way of thinking.

D. Quillen (Fields medal 1978) did solve some problems, but his main achievement is the solution of a very unusual problem: to give a good definition of the so-called higher algebraic K-functors. It is a theory. Moreover, there are other solutions. Eventually, it turned out that they all provide equivalent definitions. But Quillen’s definitions (actually, he suggested two) are much better than the others.

So, I do not care much whether Lurie solved some “important problems” or not. Moreover, in the current situation I would rather prefer that he did not solve any well-known problems, if he is to get a Fields medal. The contrast with Hungarian combinatorics, which is concentrated on statements and problems, will make mathematics healthier.

Problems are very misleading. Often they achieve their status not because they are really important, but because a prize was associated with them (the Fermat Last Theorem), or because they were posed by a famous mathematician. An example of the latter situation is nothing else but the Poincaré Conjecture – in fact, Poincaré did not state it as a conjecture; he just mentioned that “it would be interesting to know the answer to the following question”. It is not particularly important by itself. It claims that one difficult-to-verify property (being homeomorphic to a 3-sphere) is equivalent to another difficult-to-verify property (having trivial fundamental group). In practice, if you know that the fundamental group is trivial, you also know that your manifold is a 3-sphere.
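For reference, in modern notation the statement reads: for a closed (compact, boundaryless) 3-manifold $M$,

$$\pi_1(M) \cong 1 \iff M \ \text{is homeomorphic to} \ S^3,$$

where only the “$\Rightarrow$” direction is difficult; the converse is immediate, since $\pi_1(S^3)$ is trivial.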

Next post: New ideas.