About the title

I changed the title of this blog on March 20, 2013 (it used to be called “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting the work of P. Deligne to the public on the occasion of the 2013 Abel Prize being awarded to Deligne (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel Prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would have preferred the prize to P. Deligne to be awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: to take over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Wednesday, August 21, 2013

About some ways to work in mathematics

Previous post: New ideas.


From a comment by Tamas Gabal:

“...you mentioned that the problems are often solved by methods developed for completely different purposes. This can be interpreted in two different ways. First - if you work on some problem, you should constantly look for ideas that may seem unrelated to apply to your problem. Second - focus entirely on the development of your ideas and look for problems that may seem unrelated to apply your ideas. I personally lean toward the latter, but your advice may be different.”

Both ways of working are possible. There are also other ways: for example, not having any specific problem to solve. One should not promote one way or the other as the right one. You should work in whatever way suits you best. Otherwise you are unlikely to succeed, and you will miss most of the joy.

Actually, my statement did not suggest either of these approaches. Sometimes a problem is solved by discovering a connection between previously unrelated fields, and sometimes a problem is solved entirely within the context in which it was originally posed. You never know. And how would one constantly look for unrelated ideas? A useful idea may be hidden deep inside some theory, invisible from the outside. Nobody studies the whole of mathematics in the hope that this will help to solve a specific problem.

I think that it would be better not to think in terms of this alternative at all. You have a problem to solve, you work on it in all the ways you can (most approaches will fail – this is the unpleasant part of the profession), and that’s it. My advice would be to follow developments in a sufficiently big chunk of mathematics. Do not limit yourself to, say, algebra (if your field is algebra). The division of mathematics into geometry, algebra, and analysis is quite outdated. Then you may suddenly learn about some idea which will help you.

Also, you do not need to have a problem to begin with. Usually a mathematician starts with a precisely stated problem, suggested by the Ph.D. advisor. But even this is not necessary.

My own way of working is very close to the way M. Atiyah described his way of working in an interview published in “The Mathematical Intelligencer” in the early 1980s (of course, I do not claim that the achievements are comparable). This interview is highly recommended; it is also highly recommended by T. Gowers. I believe that I explained how I work to a friend (who asked a question similar to yours) before I read this interview. Anyhow, I described my way of working to him as follows. I do not work on any specific problem, except for my own working conjectures. I swim in mathematics as in a sea or a river, looking around for interesting things (the river of mathematics carries much more stuff than a real river). Technically this means that I follow various sources reporting on current developments, including talks; I read papers, both current and old ones; and I learn some material from textbooks. An advanced graduate-level textbook outside my area is my favorite type of mathematics book. I do this because it is what I like to do, not because I want to solve a problem or need to publish 12 papers during the next 3 years. From time to time I see something to which, I feel, I can contribute. From time to time I see connections which were not noticed before.

My work in “my area” started in the following way. I was familiar with a very new theory, which I had learned from the only available source (until about 2–3 years ago!): a French seminar devoted to its exposition. The author never wrote down any details. Then a famous mathematician visited us and gave a talk about a new (not yet published) remarkable theorem of another mathematician (it seems to me that it is good when people speak not only about their own work). At a key point the proof used an outside “Theorem A” by yet another mathematician. The speaker outlined its proof in a few phrases (most speakers would just quote Theorem A, so I was really lucky). Very soon I realized (maybe the same day, or even during the talk) that the above new theory allows one to at least partially transplant Theorem A into a completely different context, following the outline from the talk. But there was a problem: the conclusion of Theorem A says that you are either in a very nice generic situation or in an exceptional situation. In my context there were obvious exceptions, but I had no idea whether there were non-obvious exceptions, nor how to approach any exceptions. So, I did not even start to work on any details. 2–3 years later a preprint arrived in the mail. It was sent to me for reasons not related at all to the above story; in fact, I had not told anybody about these ideas. The preprint contained exactly what I needed: a proof that there are only the obvious exceptional cases (without mentioning Theorem A). Within a month I had a proof of an analogue of Theorem A (this proof was quickly replaced by a better one, and I am no longer able to reproduce it). Naturally, I started to look around: what else can be done in my context? As it turned out, a lot. And the theory I learned from that French seminar is not needed for many of the interesting things.

Could all this have been planned in advance, following the advice of some experienced person? Certainly not. But if you do like this style, my advice would be: work this way. You will not be able to predict when you will discover something interesting, but you will discover things. If this style does not appeal to you, do not try it.

Note that this style is the opposite of Gowers’s. He starts with a problem. His belief that mathematics can be done by computers is based on a not quite explicit assumption that his way is the only way, and he keeps a place for humans in his not-quite-science-fiction scenario, at least at the beginning: humans are needed as the source of problems for computers. I don’t see any motivation for humans to supply computers with mathematical problems, but, apparently, Gowers does. More importantly, a part of mathematics which admits solutions of its problems by computers will very soon die out. Since the proofs will be produced and verified by computers, humans will have no source of inspiration (which is the proofs).


Next post: Is algebraic geometry applied or pure mathematics?

Tuesday, August 20, 2013

New ideas

Previous post: Did J. Lurie solve any big problem?


Tamas Gabal asked:

“Dear Sowa, in your own experience, how often genuinely new ideas appear in an active field of mathematics and how long are the periods in between when people digest and build theories around those ideas? What are the dynamics of progress in mathematics, and how various areas are different in this regard?”

Here is my partial reply.


This question requires a book-length answer, especially because it is not very precisely formulated. I will try to be brief. :-)

First of all, what should be considered a genuinely new idea? How new and original is it required to be? Even for such a fundamental notion as the integral there are different choices. At one end, there is only one new idea related to it, which predates the discovery of mathematics itself: namely, the idea of area. If we lower our requirements a little, there will be 3 other ideas, associated with the works of Archimedes, Lebesgue, and the by now hardly known works of Denjoy, Perron, and others. The Riemann integral is just a modern version of Archimedes and other Ancient Greek mathematicians. The Denjoy integral generalizes the Lebesgue one and has some desirable properties which the Lebesgue integral lacks. But it turned out to be a dead end, without any applications to topics of general interest. I will stop my survey of the theory of integration here: there are many other contributions. The point is that if we lower our requirements further, then we have many more “genuinely new” ideas.
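To illustrate why Lebesgue’s contribution counts as a genuinely new idea rather than a mere refinement (my example, not part of the question I am answering), consider the Dirichlet function on the interval [0, 1]:

```latex
f(x) =
\begin{cases}
1 & \text{if } x \in \mathbb{Q}, \\
0 & \text{if } x \notin \mathbb{Q}.
\end{cases}
```

Every upper Riemann sum of f equals 1 and every lower sum equals 0, so f is not Riemann integrable. But the rationals form a set of Lebesgue measure zero, so f is Lebesgue integrable, with \int_0^1 f \, d\mu = 0. No sharpening of the Archimedes–Riemann approach yields this; one needs the new idea of measure.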

It would be much better to speak not about some vague levels of originality, but about areas of mathematics. Some ideas are new and important inside the theory of integration, but are of almost no interest for outsiders.

You asked about my personal experience. Are you asking about what my general knowledge tells me, or about what happened in my own mathematical life? Even if you are asking about the latter, it is very hard to answer. At the highest level I contributed no new ideas. One may even say that nobody after Grothendieck did (although I personally believe that 2 or 3 other mathematicians did), so I am not ashamed. I am not inclined to classify my work as analysis, algebra, geometry, topology, etc. Formally, I am assigned to one of these boxes, but this only hurts me and my research. Still, there is a fairly narrow subfield of mathematics to which I contributed, probably, 2 or 3 ideas. According to A. Weil, if a mathematician has contributed even 1 new idea, he is really exceptional; most mathematicians do not contribute any new ideas. If a mathematician has contributed 2 or 3 new ideas, he or she would be a great mathematician, according to A. Weil. For this reason, I wrote “2 or 3” not without great hesitation. I do not overestimate myself. I wanted to illustrate what happens if the area is sufficiently narrow, though not necessarily narrowed to the limit. The area I am talking about can be very naturally partitioned further. I worked in other fields too, and I hope that those papers also contain a couple of new ideas. For sure, they are of a level lower than the one A. Weil had in mind.

On the one hand, this personal example shows another extreme way of counting the frequency of new ideas. I don’t think that it would be interesting to lower the level further: many papers and even small lemmas contain some little new idea (still, many more do not). On the other hand, this is important on a personal level. Mathematics is a very difficult profession, and it has lost almost all its appeal as a career due to the changes in the universities (at least in the West, especially in the US). It is better to know in advance what kind of internal reward you may get out of it.

As for the timeframe, I think that a new idea is usually understood and used within a year (one has to keep in mind that mathematics is a very slow art) by a few followers of the discoverer, often his or her students or personal friends. Here “a few” means something like 2–5 mathematicians. The mathematical community needs about 10 years to digest something new, and sometimes it needs much more time. It seems that all this is independent of the level of the contribution. Less fundamental ideas are of interest to fewer people, so they are digested more slowly, despite being easier.

I don’t have much to say about the dynamics (what is the dynamics here?) of progress in mathematics. The past is discussed in many books about the history of mathematics, although I don’t know any that I could recommend without reservations. The only exception is the historical notes at the ends of N. Bourbaki’s books (they are translated into English and published as a separate book by Springer). A good starting point for reading about the 20th century is the article by M. Atiyah, “Mathematics in the 20th century”, American Mathematical Monthly, August/September 2001, pp. 654–666. I will not try to predict the future. If you predict it correctly, nobody will believe you; if not, there is no point. Mathematicians usually try to shape the future by posing problems, but this usually fails even if the problem is solved, because it is solved by tools developed for other purposes. And the future of mathematics is determined by tools. A solution of a really difficult problem often kills an area of research, at least temporarily (for decades at a minimum).

My predictions for pure mathematics are rather bleak, but they are based on observing the basic trends in society, not on the internal situation in mathematics. There is an internal problem in mathematics pointed out by C. Smoryński in the 1980s. The very fast development of mathematics in the preceding decades created many large gaps in the mathematical literature. Some theories lack readable expositions; some theorems are universally accepted but appear to have big gaps in their proofs. C. Smoryński predicted that mathematicians would turn to expository work and would clean up this mess. He also predicted more attention to the history of mathematics: a lot of ideas are hard to understand without knowing why and how they were developed. His predictions have not materialized yet. Expository work is often more difficult than so-called “original research”, but it is hardly rewarded.


Next post: About some ways to work in mathematics.

Monday, March 25, 2013

Hard, soft, and Bott periodicity - Reply to T. Gowers

Previous post: Reply to JSE.

This is a reply to a comment of T. Gowers.

Yesterday I left the remarks about “hard” arguments and Bott periodicity without any comments. Here are a few.

First, the meaning of the word “hard” varies from person to person. There is a fairly precise definition in analysis, due to Hardy. Still, I fail to see how to use it to classify, say, Lars Ahlfors’s work: is it hard or soft? I was told once that it is not “hard analysis”, but was not given any meaningful explanation of why. For me, Ahlfors is the greatest analyst of the last century (let us assume that the 20th century started around 1910, at least, in order to avoid hardly relevant comparisons).

Notice that the terminology is already fairly misleading: the opposite of “hard” is not “easy” (even though hard analysis is assumed to be difficult and hard to work in); it is “soft”. What dichotomy is understood here? Clearly, finding the right definitions is not an easy thing; more often than not it is very difficult. The defined objects may turn out to be either “hard” or “soft” depending on what we wanted. For definitions I see only one meaningful interpretation: objects are hard if they are rigid (like Platonic solids), and soft if they can be easily deformed (and the space of deformations is high-dimensional) without losing their key characteristics. It seems that “hard” theories are very often the ones dealing with “soft” objects.

But I suspect that the people using the hard–soft terminology will disagree with me. At the same time, in conceptual mathematics there is no such dichotomy at all, and it is impossible to acquire any experience in using it.

The example of the Bott periodicity theorem is a good testing ground. The situation with Bott periodicity is more or less the opposite of what you wrote about it. First of all, it is not a black box to be used without opening it.
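For readers who have not seen the statement (this formulation is mine, not part of the comment I am replying to), the phenomenon in question says that the stable homotopy groups of the unitary and orthogonal groups are periodic:

```latex
\pi_k(U) \cong \pi_{k+2}(U), \qquad \pi_k(O) \cong \pi_{k+8}(O) .
```

Equivalently, in terms of reduced complex K-theory, \tilde{K}(S^2 \wedge X) \cong \tilde{K}(X) for compact X, with an 8-fold analogue for real K-theory. Everything below concerns what sits inside these isomorphisms.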

The first proof was based on Morse theory for the infinite-dimensional space of curves in certain classical manifolds. (A nice idea of Morse reduces the problem at once to the finite-dimensional situation.) This result is crucial for topological K-theory; it is built into its structure. But I never saw any detailed exposition in which the original theorem of Bott was used as a black box for developing topological K-theory. The theorem seems to be too weak for this: it provides only for a one-point space the result needed for a wide class of topological spaces. Probably, the machinery of algebraic topology allows one to deduce the general result from the result for a one-point space, but this definitely cannot be done without reworking Bott’s theorem in order to get a more explicit result first. Atiyah in his book uses another proof (due to him and Bott), which has the advantage of being “elementary” and giving the result for all reasonable spaces without any intermediaries, and the disadvantage of being the most obscure one. Later on, K-theory (and, hence, Bott periodicity) was used to prove the Atiyah–Singer index theorem. The index theorem has an advanced version, the index theorem for families (of elliptic operators).

In fact, the really useful theorem is the index theorem for families, not the original index theorem. After the second proof of the index theorem was found, which imitated Grothendieck’s proof of the Grothendieck–Riemann–Roch theorem and made it possible to prove the index theorem for families (the first proof does not), Atiyah used it to give a new proof of Bott periodicity. One may suspect a circular argument here, but there is none: a carefully excised fragment of this proof does not depend on Bott periodicity, and, combined with an algebraic idea due to Atiyah, it led to a new proof of Bott periodicity. This proof turned out to be the most important one. The subject of analytic K-theory is to a large extent an attempt to use the ideas of this proof in as general a situation as possible, and to apply the results. The main point here is that people working in analytic K-theory are not using Bott periodicity as a black box; they are thinking about what is really inside this box. By now we have at least 8 substantially different proofs of Bott periodicity, and progress on questions related to Bott periodicity usually requires rethinking the theorem and its proof, not using it as a black box. Perhaps because of all this, people prefer to speak about the phenomenon of Bott periodicity rather than about Bott’s theorem.



Next post: To appear.