About the title

I changed the title of the blog on March 20, 2013 (it used to have the title “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting to the public the works of P. Deligne on the occasion of the award of the Abel prize to Deligne in 2013 (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would prefer that the prize to P. Deligne had been awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: to take over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Friday, August 23, 2013

The role of the problems

Previous post: Is algebraic geometry applied or pure mathematics?


From a comment by Tamas Gabal:

“I also agree that many 'applied' areas of mathematics do not have famous open problems, unlike 'pure' areas. In 'applied' areas it is more difficult to make bold conjectures, because the questions are often imprecise. They are trying to explain certain phenomena and most efforts are devoted to incremental improvements of algorithms, estimates, etc.”

The obsession of modern pure mathematicians with famous problems is not quite healthy. The proper role of such problems is to serve as a testing ground for new ideas, concepts, and theories. The reasons for this obsession appear to be purely social and geopolitical. The mathematical Olympiads have turned into a sort of professional sport, where the winner increases the prestige of their country. Fields medals, Clay’s millions, and zillions of other prizes increase the social role of problem solving. The reason is obvious: a solution of a long-standing problem is clearly an achievement. In contrast, a new theory may prove its significance in ten years (and this will disqualify its author for the Fields medal), but may prove it only after 50 years or even more, like Grassmann’s theory. By the way, this is the main difficulty in evaluating J. Lurie's work.

Poincaré wrote that problems with a “yes/no” answer are not really interesting. The vague problems of the type of explaining certain phenomena are the most interesting ones and the most likely to lead to some genuinely new mathematics. In contrast with applied mathematics, incremental progress is rare in pure mathematics, and is not valued much. I am aware that many analysts will object (say, T. Tao in his initial incarnation as an expert in harmonic analysis), and may say that replacing 15/16 by 16/17 in some estimate (the fractions are invented by me on the spot) is huge progress comparable with solving one of the Clay problems. Still, I hold a different opinion. With these fractions the goal is certainly to get the constant 1, and no matter how close to 1 you get, you will still need a radically new idea to reach 1.

It is interesting to note that the mathematicians who selected the Clay problems were aware of the fact that a “yes/no” answer is not always the desired one. They included in the description of the prize a clause to the effect that a counterexample (a “no” answer) to a conjecture included in the list does not automatically qualify for the prize. The conjectures are such that a “yes” answer always qualifies, but a “no” answer is interesting only if it really clarifies the situation.


Next post: Graduate level textbooks I.

Is algebraic geometry applied or pure mathematics?

Previous post: About some ways to work in mathematics.

From a comment by Tamas Gabal:

“This division into 'pure' and 'applied' mathematics is real, as it is understood and awkwardly enforced by the math departments in the US. How is algebraic geometry not 'applied' when so much of its development is motivated by theoretical physics?”

Of course, the division into pure and applied mathematics is real. They are two rather different types of human activity in every respect (including the role of the “problems”). Contrary to what you think, it is hardly reflected in the structure of US universities. Both pure and applied mathematics belong to the same department (with few exceptions). This allows the university administrators to freely convert positions in pure mathematics into positions in applied mathematics. They never do the opposite conversion.

Algebraic geometry is not applied. You will not be able to fool any dean or provost with such a statement. I am surprised that this is, apparently, not obvious anymore. Here are some reasons.

1. First of all, the part of theoretical physics in which algebraic geometry is relevant is itself as pure as pure mathematics. It deals mostly with theories which cannot be tested experimentally: the required conditions existed only in the first 3 seconds after the Big Bang and, probably, only much earlier. The motivation for these theories is more or less purely aesthetic, as in pure mathematics. Clearly, these theories are of no use in real life.

2. Being motivated by outside questions does not turn any branch of mathematics into an applied branch. Almost all branches of mathematics started from some questions outside of mathematics. To qualify as applied, a theory should be actually applied to some outside problems. By the way, this is the main problem with what administrators call “applied mathematics”. While all “applied mathematicians” refer to applications as a motivation of their work, their results are nearly always useless. Moreover, they are usually predictably useless. In contrast, pure mathematicians cannot justify their research by applications, but their results eventually turn out to be very useful.

3. Algebraic geometry was developed as a part of pure mathematics with no outside motivation. What happens when it interacts with theoretical physics? The standard pattern over the last 30-40 years is the following. Physicists use their standard mode of reasoning to state, usually not precisely, some mathematical conjectures. The main tool of physicists not available to mathematicians is the Feynman integral. Then mathematicians prove these conjectures using already available tools from pure mathematics, and they do this surprisingly fast. Sometimes a proof is obtained before the conjecture is published. About 25 years ago I.M. Singer (of Atiyah-Singer theorem fame) wrote an outline of what he hoped would result from the interaction of mathematics with theoretical physics in the near future. In one phrase, one may say that he hoped for an infinite-dimensional geometry as nice and efficient as finite-dimensional geometry is. This would be a sort of replacement for the Feynman integral. Well, his hopes did not materialize. The conjectures suggested by physicists are still being proved by finite-dimensional means; physics did not suggest any way even to make precise what kind of infinite-dimensional geometry is desired, and there is no interesting or useful genuinely infinite-dimensional geometry. By “genuinely” I mean “not essentially/morally equivalent to a unified sequence of finite-dimensional theories or theorems”.

To sum up, nothing dramatic resulted from the interaction of algebraic geometry and theoretical physics. I do not mean that nothing good resulted. In mathematics this interaction resulted in some quite interesting theorems and theories. It did not change the landscape completely, as Grothendieck’s ideas did, but it made it richer. As for physics, the question is still open. More and more people are taking the position that these untestable theories are completely irrelevant to the real world (and hence are not physics at all). There are no applications, and hence the whole activity cannot be considered an applied one.


Next post: The role of the problems.

Wednesday, August 21, 2013

About some ways to work in mathematics

Previous post: New ideas.


From a comment by Tamas Gabal:

“...you mentioned that the problems are often solved by methods developed for completely different purposes. This can be interpreted in two different ways. First - if you work on some problem, you should constantly look for ideas that may seem unrelated to apply to your problem. Second - focus entirely on the development of your ideas and look for problems that may seem unrelated to apply your ideas. I personally lean toward the latter, but your advice may be different.”

Both ways to work are possible. There are also other ways: for example, not to have any specific problem to solve. One should not suggest one way or another as the right one. You should work in the way which suits you best. Otherwise you are unlikely to succeed, and you will miss most of the joy.

Actually, my statement did not suggest either of these approaches. Sometimes a problem is solved by discovering a connection between previously unrelated fields, and sometimes a problem is solved entirely within the context in which it was originally posed. You never know. And how does one constantly look for outside ideas? A useful idea may be hidden deep inside some theory and be invisible otherwise. Nobody studies the whole of mathematics in the hope that this will help to solve a specific problem.

I think that it would be better not to think in terms of this alternative at all. You have a problem to solve, you work on it in all the ways you can (most approaches will fail – this is the unpleasant part of the profession), and that’s it. The advice would be to follow developments in a sufficiently big chunk of mathematics. Do not limit yourself to, say, algebra (if your field is algebra). The division of mathematics into geometry, algebra, and analysis is quite outdated. Then you may suddenly learn about some idea which will help you.

Also, you do not need to have a problem to begin with. Usually a mathematician starts with a precisely stated problem suggested by his or her Ph.D. advisor. But even this is not necessary.

My own way of working is very close to the way M. Atiyah described as his in an interview published in “The Mathematical Intelligencer” in the early 1980s (of course, I do not claim that the achievements are comparable). This interview is highly recommended; it is also highly recommended by T. Gowers. I believe that I explained how I work to a friend (who asked a question similar to yours) before I read this interview. Anyhow, I described my way to him as follows. I do not work on any specific problem, except my own working conjectures. I am swimming in mathematics as in a sea or a river, looking around for interesting things (the river of mathematics carries much more stuff than a real river). Technically this means that I follow various sources reporting on current developments, including talks; I read papers, both current and old ones; and I learn some material from textbooks. An advanced graduate-level textbook not in my area is my favorite kind of book in mathematics. I am doing this because this is what I like to do, not because I want to solve a problem or need to publish 12 papers during the next 3 years. From time to time I see something to which, I feel, I can contribute. From time to time I see some connections which were not noticed before.

My work in “my area” started in the following way. I was familiar with a very new theory, which I learned from the only available (until about 2-3 years ago!) source: a French seminar devoted to its exposition. The author never wrote down any details. Then a famous mathematician visited us and gave a talk about a new (not yet published) remarkable theorem of another mathematician (it seems to me that it is good when people speak not only about their own work). The proof used at a key point an outside “Theorem A” by still another mathematician. The speaker outlined its proof in a few phrases (most speakers would just quote Theorem A, so I was really lucky). Very soon I realized (maybe the same day or even during the talk) that the above new theory allows one to at least partially transplant Theorem A into a completely different context, following the outline from the talk. But there was a problem: the conclusion of Theorem A tells you that you are either in a very nice generic situation, or in an exceptional situation. In my context there are obvious exceptions, but I had no idea whether there are non-obvious exceptions, or how to approach any exceptions. So, I did not even start to work on any details. 2-3 years later a preprint arrived in the mail. It was sent to me for reasons not related at all to the above story; actually, I did not tell anybody about these ideas. The preprint contained exactly what I needed: a proof that there are only the obvious exceptional cases (not mentioning Theorem A). Within a month I had a proof of an analogue of Theorem A (this proof was quickly replaced by a better one, and I am not able to reproduce it). Naturally, I started to look around: what else can be done in my context? As it turned out, a lot. And the theory I learned from that French seminar is not needed for many of the interesting things.

Could all this be planned in advance following some advice of some experienced person? Certainly not. But if you do like this style, my advice would be: work this way. You will not be able to predict when you will discover something interesting, but you will discover something. If this style does not appeal to you, do not try it.

Note that this style is the opposite of Gowers’s. He starts with a problem. His belief that mathematics can be done by computers is based on a not quite explicit assumption that his way is the only one, and he keeps a place for humans in his not-very-science-fiction scenario, at least at the beginning: humans are needed as the source of problems for computers. I don’t see any motivation for humans to supply computers with mathematical problems, but, apparently, Gowers does. More importantly, a part of mathematics which admits solutions of its problems by computers will very soon die out. Since the proofs will be produced and verified by computers, humans will have no source of inspiration (which is the proofs).


Next post: Is algebraic geometry applied or pure mathematics?

Tuesday, August 20, 2013

New ideas

Previous post: Did J. Lurie solve any big problem?


Tamas Gabal asked:

“Dear Sowa, in your own experience, how often genuinely new ideas appear in an active field of mathematics and how long are the periods in between when people digest and build theories around those ideas? What are the dynamics of progress in mathematics, and how various areas are different in this regard?”

Here is my partial reply.


This question requires a book-length answer, especially because it is not very precisely formulated. I will try to be much shorter. :-)

First of all, what should be considered as genuinely new ideas? How new and original are they required to be? Even for such a fundamental notion as the integral there are different choices. At one end, there is only one new idea related to it, which predates the discovery of mathematics itself: namely, the idea of area. If we lower our requirements a little, there will be 3 other ideas, associated with the works of Archimedes, Lebesgue, and the by now hardly known works of Denjoy, Perron, and others. The Riemann integral is just a modern version of the ideas of Archimedes and other Ancient Greek mathematicians. The Denjoy integral generalizes the Lebesgue one and has some desirable properties which the Lebesgue integral lacks. But it turned out to be a dead end without any applications to topics of general interest. I will stop my survey of the theory of integration here: there are many other contributions. The point is that if we lower our requirements further, then we have many more “genuinely new” ideas.
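To make the comparison concrete (this illustrative example is mine, not part of the survey above), the standard function showing that the Denjoy integral is strictly more general than the Lebesgue one is
\[
F(x) = x^2 \sin\!\left(\frac{1}{x^2}\right) \ \text{for } x \neq 0, \qquad F(0) = 0 .
\]
This $F$ is differentiable at every point of $[0,1]$, but its derivative $F'$ fails to be Lebesgue integrable near $0$; the Denjoy (equivalently, Perron) integral does integrate $F'$ and recovers $F(x) = \int_0^x F'(t)\,dt$. That is exactly the “desirable property” at stake: every derivative is integrable and the fundamental theorem of calculus holds without exceptions.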

It would be much better to speak not about some vague levels of originality, but about areas of mathematics. Some ideas are new and important inside the theory of integration, but are of almost no interest to outsiders.

You asked about my personal experience. Are you asking about what my general knowledge tells me, or about what happened in my own mathematical life? Even if you are asking about the latter, it is very hard to answer. At the highest level I contributed no new ideas. One may even say that nobody after Grothendieck did (although I personally believe that 2 or 3 other mathematicians did), so I am not ashamed. I am not inclined to classify my work as analysis, algebra, geometry, topology, etc. Formally, I am assigned to one of these boxes; but this only hurts me and my research. Still, there is a fairly narrow subfield of mathematics to which I contributed, probably, 2 or 3 ideas. According to A. Weil, if a mathematician has contributed 1 new idea, he is really exceptional; most mathematicians do not contribute any new ideas. If a mathematician has contributed 2 or 3 new ideas, he or she would be a great mathematician, according to A. Weil. For this reason, I wrote “2 or 3” not without great hesitation. I do not overestimate myself. I wanted to illustrate what happens if the area is sufficiently narrow, but not necessarily narrowed to the limit. The area I am talking about can very naturally be partitioned further. I worked in other fields too, and I hope that those papers also contain a couple of new ideas. For sure, they are of a level lower than the one A. Weil had in mind.

On the one hand, this personal example shows another extreme way to count the frequency of new ideas. I don’t think that it would be interesting to lower the level further. Many papers and even small lemmas contain some little new ideas (still, many more do not). On the other hand, this is important on a personal level. Mathematics is a very difficult profession, and it has lost almost all its appeal as a career due to the changes in the universities (at least in the West, especially in the US). It is better to know in advance what kind of internal reward you may get out of it.

As for the timeframe, I think that a new idea is usually understood and used within a year (one has to keep in mind that mathematics is a very slow art) by a few followers of the discoverer, often his or her students or personal friends. Here “a few” is something like 2-5 mathematicians. The mathematical community needs about 10 years to digest something new, and sometimes it needs much more time. It seems that all this is independent of the level of the contribution. The less fundamental ideas are of interest to fewer people. So they are digested more slowly, despite being easier.

I don’t have much to say about the dynamics (what is the dynamics here?) of progress in mathematics. The past is discussed in many books about the history of mathematics, although I don’t know any which I could recommend without reservations. The only exception is the historical notes at the ends of N. Bourbaki’s books (they have been translated into English and published as a separate book by Springer). A good starting point for reading about the 20th century is the article by M. Atiyah, “Mathematics in the 20th century”, American Mathematical Monthly, August/September 2001, p. 654-666. I will not try to predict the future. If you predict it correctly, nobody will believe you; if not, there is no point. Mathematicians usually try to shape the future by posing problems, but this usually fails even if the problem is solved, because it is solved by tools developed for other purposes. And the future of mathematics is determined by tools. A solution of a really difficult problem often kills an area of research, at least temporarily (for decades at a minimum).

My predictions for pure mathematics are rather bleak, but they are based on observing the basic trends in society, and not on the internal situation in mathematics. There is an internal problem in mathematics pointed out by C. Smorinsky in the 1980s. The very fast development of mathematics in the preceding decades created many large gaps in the mathematical literature. Some theories lack readable expositions; some theorems are universally accepted but appear to have big gaps in their proofs. C. Smorinsky predicted that mathematicians would turn to expository work and clear up this mess. He also predicted more attention to the history of mathematics. A lot of ideas are hard to understand without knowing why and how they were developed. His predictions have not materialized yet. Expository work is often more difficult than so-called “original research”, but it is hardly rewarded.


Next post: About some ways to work in mathematics.

Sunday, August 4, 2013

Did J. Lurie solve any big problem?

Previous post: Guessing who will get Fields medals - Some history and 2014.

Tamas Gabal asked the following question.

I heard a criticism of Lurie's work, that it does not contain startling new ideas, complete solutions of important problems, even new conjectures. That he is simply rewriting old ideas in a new language. I am very far from this area, and I find it a little disturbing that only the ultimate experts speak highly of his work. Even people in related areas can not usually give specific examples of his greatness. I understand that his objectives may be much more long-term, but I would still like to hear some response to these criticisms.

Short answer: I don't care. Here is a long answer.

Well, this is the reason why my opinion about Lurie is somewhat conditional. As I already said, if an impartial committee confirms the significance of Lurie’s work, it will remove my doubts and, very likely, will stimulate me to study his work in depth. It is much harder to predict what the influence of the actual committee will be. Perhaps I will try to learn his work in any case. If he does not get the medal, then in the hope of making sure that the committee is wrong.

I planned to discuss many peculiarities of mathematical prizes in another post, but one of these peculiarities ought to be mentioned now. Most mathematical prizes go to people who solved some “important problems”. In fact, most of them go to people who made the last step in solving a problem. There is a recent and famous example at hand: the Clay $1,000,000.00 prize was awarded to Perelman alone. But the method was designed by R. Hamilton, who did a huge amount of work but wasn’t able to make “the last step”. Perhaps just because of age. As Perelman said to a Russian news agency, he declined the prize because in his opinion Hamilton’s work is no less important than his own, and Hamilton deserves the prize no less than he does. It seems that this reason is still not widely enough known. To the best of my knowledge, it was not included in any press release of the Clay Institute. The Clay Institute scheduled the award ceremony as if they knew nothing, and then held the ceremony as planned. Except that Grisha Perelman wasn’t present, and he did not accept the prize in any sense.

So, the prizes go to mathematicians who made the last step in the solution of a recognized problem. The mathematicians building the theories on which these solutions are based almost never get Fields medals. Their chances are better when the prize is for lifetime contribution (as is the case with the Abel prize). There are a few exceptions.

First of all, A. Grothendieck is an exception. He proved part of the Weil conjectures, but not the most important one (later proved by P. Deligne). One of the Weil conjectures (the basic one) was independently proved by B. Dwork, by a completely different and independent method, and published earlier (by the way, this is a fairly accessible and extremely beautiful piece of work). The report of J. Dieudonné at the 1966 Congress outlines a huge theory, to a large extent still not written down at the time. It includes some theorems, like the Grothendieck-Riemann-Roch theorem, but: (i) the GRR theorem does not solve any established problem; it is a radically new type of statement; (ii) Grothendieck did not publish his proof, being of the opinion that it was not good enough (an exposition was published by Borel and Serre); (iii) it is just a byproduct of his new way of thinking.

D. Quillen (Fields medal 1978) did solve some problems, but his main achievement is the solution of a very unusual problem: to give a good definition of the so-called higher algebraic K-functors. It is a theory. Moreover, there are other solutions. Eventually, it turned out that they all provide equivalent definitions. But Quillen’s definitions (actually, he suggested two) are much better than the others.

So, I do not care much whether Lurie solved some “important problems” or not. Moreover, in the current situation I would rather prefer that he has not solved any well-known problems, if he is to get a Fields medal. The contrast with Hungarian combinatorics, which is concentrated on statements and problems, will make mathematics healthier.

Problems are very misleading. Often they achieve their status not because they are really important, but because a prize was associated with them (the Fermat Last Theorem), or because they were posed by a famous mathematician. An example of the latter situation is nothing else but the Poincaré Conjecture – in fact, Poincaré did not state it as a conjecture; he just mentioned that “it would be interesting to know the answer to the following question”. It is not particularly important by itself. It claims that one difficult-to-verify property (being homeomorphic to a 3-sphere) is equivalent to another difficult-to-verify property (having trivial fundamental group). In practice, if you know that the fundamental group is trivial, you also know that your manifold is a 3-sphere.
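For reference, the modern formulation of the conjecture (my paraphrase of the standard statement, not Poincaré’s own wording) is:
\[
M \ \text{a closed, simply connected 3-manifold} \ \Longrightarrow \ M \ \text{is homeomorphic to } S^3 .
\]
Both sides are global properties of the manifold, and, as said above, verifying either of them directly is about equally hard.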

Next post: New ideas.

Monday, July 29, 2013

Guessing who will get Fields medals - Some history and 2014

Previous post: 2014 Fields medalists?

This was a relatively easy task for about three decades. But it is nearly impossible now, at least if you do not belong to the “inner circle” of the current President of the International Mathematical Union. And they change at each Congress, so one can hardly hope to belong to the inner circle of all of them.

I would like to try to explain my approach to judging a particular selection of Fields medalists and to guessing the winners fairly efficiently in the past. This cannot be done without going a little bit into the history of the Fields medals as it appears to a mathematician, and not to a historian working with archives. I have no idea how to get to the relevant archives, or even whether they exist. I suspect that there is no written record of the deliberations of any Fields medal committee.

The first two Fields medals were awarded in 1936 to Lars Ahlfors and Jesse Douglas. It was the first award, and it wasn’t a big deal. It looks like the man behind this choice was Constantin Carathéodory. I think that this was a very good choice. In my personal opinion, Lars Ahlfors is the best analyst of the previous century, and he did his most important work after the award, which is important in view of the terms of Fields’s will. Actually, his best work was done after WWII. If not for the war, it would have been done earlier, but still after the award. J. Douglas solved the main problem about minimal surfaces (in the usual 3-dimensional space) at the time. He did with bare hands things that we now do using powerful frameworks developed later. I believe that he became seriously ill soon afterward, but today I failed to find any confirmation of this online. Now I remember that I was just told about his illness. Apparently, he did not produce any significant results later. Had he continued to work on minimal surfaces, he could have been forced to develop at least some of the later tools.

The next two Fields medals were awarded in 1950, and since 1950 from 2 to 4 medals have been awarded every 4 years. Initially the International Mathematical Union (abbreviated as IMU) was able to fund only 2 medals (despite the fact that the monetary part is negligible), but for several decades now it has had enough funds for 4 medals (the direct monetary value remains negligible). I was told that awarding only 2 medals in 2002 turned out to be possible only after a long battle between the Committee (or rather its Chair, S.P. Novikov) and the officials of the IMU. So, I am not alone in thinking that sometimes there are no candidates good enough for 4 medals.

I apply to the current candidates the standards of the golden years of both mathematics and the Fields medals. For mathematics, they are approximately 1940-1980, with some predecessors earlier and some spill-overs later. For the medals, they are 1936-1986, with some spill-overs later. The whole history of the Fields medals can be traced in the Proceedings of the Congresses. They are interesting in many other respects too. For example, they contain a lot of very good expository papers (and many more bad ones). It is worthwhile at least to browse them. Now they are freely available online: ICM Proceedings 1893-2010.

The presentation of the work of the 1954 medalists J.-P. Serre and K. Kodaira by H. Weyl is a pleasure to read. H. Weyl states unequivocally that their mathematics is new, that it went into new territory, and that it is based on methods unknown to most mathematicians at the time (in fact, this is still true). He even included an introduction to these methods in the published version.

The 1990 award at the Kyoto Congress was a turning point. Ludwig D. Faddeev was the Chairman of the Fields Medal Committee and the President of the IMU for the preceding 4 years. 3 out of 4 medals went to scientists a significant part of whose work was directly related to his or his students’ work. The influence went in both directions: for one winner the influence went mostly from L.D. Faddeev and his pupils; for two other winners their work turned out to be very suitable for a synthesis with some ideas of L.D. Faddeev and his pupils. All these works are related to theoretical physics. Actually, after reading the recollections of L.D. Faddeev and the prefaces to his books, it is completely clear that he is a theoretical physicist at heart, even though he has some interesting mathematical results and is formally (judging by the positions he held, for example) considered to be a mathematician.

1990 was the only year when one of the medals went to a physicist. Naturally, he never proved a theorem. But his papers from 1980-1994 contain a lot of mathematical content, mostly conjectures motivated by quantum field theory reasoning. There is no doubt that his ideas are highly original from the point of view of a mathematician (and much less so from the point of view of someone using Feynman’s integrals daily), that they provided mathematicians with a lot of problems to think about, and that they indeed resulted in quite interesting developments in mathematics. But many mathematicians, including myself, believe that the Fields medals should be awarded to outstanding mathematicians, and a mathematician should prove his or her claims. I don’t know of any award in mathematics which could be awarded for conjectures only.

In 1994 one of the medals went to the son of the President of the IMU at the time. Many people think that this is far beyond any ethical norms. The President could have resigned from his position the moment the name of his son surfaced. Moreover, he should have declined the offer of this position in 1990. It is impossible to believe that he did not suspect that his son would be a viable candidate in 2-3 years (if his son indeed deserved the medal). The President of the IMU is the person who is able, if he or she wants, to essentially determine the winners, because the selection of the members of the Fields medal Committee is essentially in his or her hands (unless there is an insurrection in the community – but this has never happened).

As a result, the system was completely destroyed in just two cycles without any changes in bylaws or procedures (since the procedures are kept secret, I cannot be sure about the latter). Still, some really good mathematicians got a medal. Moreover, in 2002 it looked like the system had recovered. Unfortunately, already in 2006 things were the same as in the 1990s. One of the awards was outrageous on ethical grounds (completely different from 1994); the long negotiations with Grisha Perelman are reminiscent of plays by Eugène Ionesco.

In the current situation I would be able to predict the winners if I knew the composition of the committee. Since this is impossible, I will pretend that the committee is as impartial as it was in 1950-1986. This is almost (but not completely) equivalent to stating my preferences.

I would be especially happy if an impartial committee would award only 2 medals and Manjul Bhargava and Jacob Lurie would be the winners. I hope that their advisors are not on the committee. Their work looks very attractive to me. I suspect that Jacob Lurie is the only mathematician working now who is comparable with the giants of the golden age. But I do not have enough time to study his papers, or, rather, his books. They are just too long for everybody except people working in the same field. Usually they are hundreds of pages long; his only published book (which covers only preliminaries) is almost 1000 pages long. Papers by Manjul Bhargava seem to be more accessible (definitely, they are much shorter). But I am not an expert in his field and I would need to study a lot before jumping into his papers. I do not have enough motivation for this now. An impartial committee would reinforce my high opinion of their work and provide an additional stimulus to study it more deeply. The problem is that I have no reason to expect the committee to be impartial.

Artur Avila is very strong, or so my expert friends tell me. His field is too narrow for my taste. The main problem is that his case is bound to be political. It depends on the balance of power between, approximately, Cambridge, MA – Berkeley and Rio de Janeiro – Paris. Here I have intentionally distorted the geolocation data.

The high ratings of Manjul Bhargava and Artur Avila in that poll are examples of the “name recognition” I mentioned. I think that an article about Manjul Bhargava appeared even in the New York Times. Being a strong mathematician from a so-called developing country (it seems that the term “non-declining” would be better for English-speaking countries), Artur Avila is known much better than American or British mathematicians of the same level.

Most of the mathematicians included in the poll would never have been considered by anybody as candidates during the golden age. There would have been several dozen of the same level in the same broadly defined area of mathematics. Sections of the Congress can serve as a first approximation to a good notion of an area of mathematics. And a Fields medalist was supposed to be really outstanding. Restricting myself to the poll list, I prefer one of the following variants: either Bhargava, or Lurie, or both, or no medals for lack of suitable candidates.



Next post: Did J. Lurie solve any big problem?

Sunday, July 28, 2013

2014 Fields medalists?

Previous post: New comments to the post "What is mathematics?"

I was asked by Tamas Gabal about the possible 2014 Fields medalists listed in an online poll. I am not ready to systematically write down my thoughts about prizes in general and Fields medals in particular, nor to predict who will get the 2014 medals. I am sure that the world would be better without any prizes, especially without Fields medals. Also, in my opinion, no more than two persons deserve 2014 Fields medals. Instead of trying to argue these points, I will quote my reply to Tamas Gabal (slightly edited).

If I knew who the members of the Fields medal committee are, I would be able to predict the medalists with 99% confidence. But the composition of the committee is a secret. In the past, the situation was rather different. The composition of the committee wasn't important. When I was just a second-year graduate student, I compiled a list of 10 candidates, among whom I considered 5 to have significantly higher chances (I never wrote down this partition, and the original list is lost for all practical purposes). All 4 winners were on the list. I was especially proud of predicting one of them; he was a fairly nontraditional choice at the time (or so I thought). I cannot do anything like this now without knowing the composition of the committee. Recent choices appear to be more or less random, with some obvious exceptions (like Grisha Perelman).

Somewhat later I wrote:

In the meantime I looked at the current results of that poll. It looks like the preferences of the public are determined by the same mechanism as the preferences for movie actors and actresses: name recognition.

Tamas Gabal replied:

Sowa, when you were a graduate student and made that list of possible winners, did you not rely on name recognition at least partially? Were you familiar with their work? That would be pretty impressive for a graduate student, since T. Gowers basically admitted that he was not really familiar with the work of Fields medalists in 2010, while he was a member of the committee. I wonder if anyone can honestly compare the depth of the work of all these candidates? The committee will seek an opinion of senior people in each area (again, based on name recognition, positions, etc.) and will be influenced by whoever makes the best case... It's not an easy job for sure.

Here is my reply.

Good question. In order to put a name on a list, one has to know this name, i.e. recognize it. But I knew many more than 10 names. Actually, this is one of the topics I wanted to write about sometime in detail. The whole atmosphere at that time was completely different from what I see around now. Maybe the place also played some role, but I doubt that its role was decisive. Most of the people around me liked to talk about mathematics, and not only about what they were doing. When some guy in Japan claimed that he had proved the Riemann hypothesis, I knew about it the same week. Note that the internet was still in the future, as were e-mails. I had a feeling that I knew about everything important going on in mathematics. I always had a little bit more curiosity than others, so I also knew about fields fairly remote from my own work.

I do not remember all 10 names on my list (I remember about 7), but 4 winners were included. It was quite easy to guess 3 of them. Everybody would agree that they were the main contenders. I am really proud of guessing the 4th one. Nobody around was talking about him or even mentioned him, and his field is quite far from my own interests. To what extent did I understand their work? I studied some work of one winner, knew the statements and had some idea about the proofs for another one (later the work of both of them influenced my own work a lot, but mostly indirectly), and knew very well what the achievements of the third one are, why they are important, etc. I knew more or less just the statements of the two main results of the 4th one, the one who was difficult to guess – for me. I was able to explain why this or that guy got the medal even to a theoretical physicist (and actually did so on one occasion). But I wasn’t able to teach a topics course about the work of any of the 4.

At the time I never heard any complaints that a medal went to the wrong person. The same about all the older awards. There was always a consensus in the mathematical community that all the people who got the medal deserved it. Maybe somebody else deserved it too, but there are only 3 or 4 medals each time.

Mathematics is a human activity. This is one of the facts that T. Gowers prefers to ignore. Nobody verifies proofs line by line. Initially, you trust your gut feeling. If you need to use a theorem, you will be forced to study the proof and understand its main ideas. The same is true about the depth of a result. You do not need to know all the proofs in order to write down a list like my list of the 10 most likely winners (the next time my list consisted of no more than 5 or 6, and all winners were included). It seems that I knew the work of all the guessed winners better than Gowers knew the work of the 2010 medalists. But even if not, there is a huge difference between a graduate student trying to guess the current year's winners, and a Fellow of the Royal Society of London, a Fields medalist himself, who is deciding who will get the 2010 medals. He should know more.

The job is surely not an easy one now, when it is all about politics. Otherwise it would be very pleasant.

Next post: Guessing who will get Fields medals - Some history and 2014.

Tuesday, June 11, 2013

New comments to the post "What is mathematics?"

Previous post: What is combinatorics and what this blog is about according to Igor Pak.


There is a new thread of comments to the post "What is mathematics?" started by Sandro Magi. The post is dated April 3; this thread started on May 31. The thread is concerned with only one claim in that post: proofs are not needed at all for applications of mathematics.

Unfortunately, the very first phrase of Sandro Magi set the tone for the rest of the discussion: "This is blatantly false". I do not like to discuss things in such a manner: with a total lack of cooperation. The combinatorialists at Gowers's blog are much more friendly, even after a direct attack on their field. But I believe that the reason is not any kind of malice on either side. This dialog is a good illustration of the near impossibility for people thinking linearly and verbally to understand people thinking visually. In this case the dialog between a mathematician (every mathematician thinks at least partially visually) and a software engineer turned out to be impossible. I have encountered the same sort of difficulties while discussing essentially any other subject, from movies to current affairs. I also see this lack of understanding of visual and "big picture" issues in the design and functionality of almost all software.

Still, it seems to me that there are some important ideas in that discussion. Of course, it would be better to give a coherent exposition. But an attempt to write it would take a lot of time, and who knows when it would be ready.

If somebody wants to comment on any issue there, I suggest posting comments here; this will result in a clearer structure of the comments. As an additional benefit, for the next 30 days the comments here are not moderated; they are moderated at that post. This rule is subject to change without notice. :-) I would like to ask Sandro Magi to continue our discussion in the comments to "What is mathematics?" and not here (of course, he is under no obligation to continue); then the whole discussion will be in one place.


Next post: 2014 Fields medalists?.

Saturday, June 1, 2013

What is combinatorics and what this blog is about according to Igor Pak

Previous post: About Timothy Gowers.

I came across the post “What is Combinatorics?” by Igor Pak. His intention seems to be to refute what is, in his opinion, a basic fault of my notes, namely, the lack of understanding of what combinatorics is.

“While myself uninterested in engaging in conversation, I figured that there got to be some old “war-time” replies which I can show to the Owl blogger.  As I see it, only the lack of knowledge can explain these nearsighted generalizations the blogger is showing.  And in the age of Google Scholar, there really is no excuse for not knowing the history of the subject, and its traditional sensitivities.”

Unfortunately, he did not show me anything. I came across his post while searching for other things on Google. Maybe he is afraid that giving me a link in a comment would engage him in conversation? I would be glad to discuss these issues with him, but if he is not inclined, how can I insist? My intention was to write a comment on his blog, but for this one needs to be registered at WordPress.com. Google is more generous, as is T. Gowers, who allows non-WordPress comments in his blog.

Indeed, I don't know much about “traditional sensitivities” of combinatorics. A Google search resulted in links to his post and to numerous papers about “noise sensitivity”.

Beyond this, he is tilting at windmills. I agree with most of what he wrote. Gian-Carlo Rota is my hero too. But I devoted a lot of time and space to explaining what I mean by "combinatorial" mathematics, and even stated that I use this term only because it is used by Gowers (and all my writings on this topic have their root in his), and I wasn't able to quickly find a good replacement (any suggestions?). See, for example, the beginning of the post “The conceptual mathematics vs. the classical (combinatorial) one”, as well as other posts and my comments in Gowers's blog. In particular, I said that there is no real division between Gowers's “second culture” and “first culture”, and therefore there is no real division between combinatorics and non-combinatorics.

So, for this blog the working definition of combinatorics is “the branches of mathematics described in two essays by T. Gowers as belonging to the second culture and opposed in spirit to Grothendieck's mathematics”.

I don't much like the boxing of all theorems or papers into various classes, be they invented by the AMS, the NSF, or other “authorities”. I cannot say what my branch of mathematics is. Administrators usually assign me to the field my Ph.D. thesis belongs to, but I have not worked in it since then. I believe that the usual division of mathematics into Analysis, Algebra, Combinatorics, Geometry, etc. is hopelessly outdated.


Next post: New comments to the post "What is mathematics?"

Sunday, May 19, 2013

About Timothy Gowers

Previous post: The conceptual mathematics vs. the classical (combinatorial) one.


This post was started as a reply to a comment by vznvzn. It quickly outgrew the comment format, but is still mostly a reply to vznvzn's remarks.

Gowers did not identify any “new mathematical strand/style”, and did not even attempt this. The opposition of “conceptual” mathematics vs. “Hungarian” combinatorics has been well known for quite a long time. It started to be associated with Hungary only after P. Erdös started to promote an extreme version of this style; but it was known for centuries. When I was in high school, it was known to any student attending a school teaching mathematics and physics at a fairly advanced level and having some interest in mathematics. Of course, this is not about the UK (Gowers is a British mathematician). I don’t know enough about the schools there.

There is nothing new in looking at the big picture and doing what you called “mathematical anthropology” either. It is just an accident that you encountered such things first in Gowers’s two essays. I doubt that you are familiar with his writing style in mathematics, or even with the more technical parts of his essay “Rough Structure and Classification” (by the way, it is available not only as a .ps file; I have a .pdf file on my computer and a hard copy). Gowers’s writing style and his mathematics are very left-brained. I saw no evidence that he even understands how right-brained mathematicians work. Apparently he does not like the results of their thinking (but carefully tries to hide this in his popular writings). This may be the main reason why he believes that computers can do mathematics. It seems to me that his post-1998 kind of mathematics (I am not familiar enough with his work on Banach spaces, for which he was awarded the Fields medal) can indeed be automated. If CS people do need this, then, please, go ahead. This will eliminate this kind of activity from mathematics without endangering the existence of mathematics or influencing its core.

But when Gowers writes plain English prose, he is excellent. Note that verbal communication is associated with the left half of the brain.

The left-right brain theory is not as clear-cut a dichotomy as it initially appeared. But I like it not so much as a scientific theory as a useful metaphor. Apparently, you are right and these days most mathematicians are left-brained. But this is an artifact of the current system of education in Western countries and not an inherent property of mathematics. Almost all mathematics taught in schools and in undergraduate classes of universities is left-brained. This bias reaches its peak during the first two years of undergraduate education, when students are required to take the calculus courses (and very often there are no other options). Only the left-brained aspect of calculus is taught in US universities. Students are trained to perform some standard algorithms (a task which can now be done, probably, even by a smartphone). The calculus taught is the left-brained Leibniz calculus, while the right-brained Newton calculus is ignored. So, right-brained people are very likely not to choose mathematics as a career: their experience tells them that this activity is very alien to them.

In fact, a mathematician usually needs both halves of the brain. Some people flourish using only the left half – if their abilities are very high. Others flourish using only the right half. But flourishing on the right half alone is only for geniuses, more or less. With all abilities concentrated in the right half, a mathematician is usually unable to write papers in a readable manner. If the results are extremely interesting, others will voluntarily take on the job of reconstructing the proofs and writing them down. (It would be much better if such work were rewarded in some tangible sense.) Otherwise, there will be no publications, and hence no jobs. The person is out of the profession. On a middle level one can survive mostly on the left half by writing a huge number of insignificant papers (the barrier to “huge” is much lower in mathematics than in other sciences). Similar effects were observed in special experiments involving middle school students. Right-brained students perform better in mathematics in general, but if one considers only mathematically gifted students, both halves are equally developed.

What you consider as Gowers’s “project/program of analysis of different schools of thought” is not due to Gowers. This is done by mathematicians all the time, and some of them have written very insightful papers and even books about it. His two essays are actually very interesting material for thinking about “different schools”; they provide an invaluable insight into the thinking of a partisan of only one very narrow school.

You are wrong in believing that the history of mathematics has very long cycles. Definitely not cycles, but let us keep this word. The mathematics of 1960 was radically different from the mathematics of 1950. I personally observed two hardly predictable changes.

There is no “paradigm shift identified” by Gowers. Apparently, Kuhn's concept of a paradigm shift does not apply to mathematics at all. The basic assumptions of mathematics have never changed, only been refined.

There is another notion of a “shift”, namely, Wigner’s shift of the second kind. It happens when scientists lose interest in some class of problems and move to a different area. This is exactly what Gowers tries to accomplish: to shift the focus of mathematical research from the conceptual (right-brained) kind to one that needs only pure “executive power” (left-brained; the term belongs to G. Hardy) at the lowest level of abstraction. If he succeeds, the transfer of mathematics from humans to computers will probably be possible. But it will be another “mathematics”. Our current mathematics is a human activity, involving tastes, emotions, a sense of beauty, etc. If it is not done by humans, and especially if the proofs are not readable by humans (as is the case with all computer-assisted proofs of anything non-trivial to date), it is not mathematics. The value to humanity of theorems about arithmetic progressions is zero if they are proved by computers. It is near zero anyhow.

Here all three main directions of Gowers’s activities merge: the promotion of combinatorics; the attempt to eliminate human mathematics; his drive for influence and power.

Thanks for appreciating my comments as “visionary”, no matter of what kind. But they are not. What I was doing in my comments to two of Gowers’s posts and in this blog is just pointing out some facts, which are, unfortunately, unknown to Gowers’s admirers, especially to the young ones or to experts in other fields. Hardly anything mentioned is new; the recent events are all documented on the web. I intentionally refrain from using ideas which may be interpreted as my own – they would be dismissed on this ground alone.

I agree that the discussion in Gowers’s blog eventually turned out to be interesting. But only after the people who demanded that I identify myself and asked why I allow myself to criticize Gowers had left. Then several real mathematicians showed up, and the discussion immediately started to make sense. I hope that the discussion in Gowers’s blog was useful at least for some people. The same about this blog. Right now it shows up as the 7th entry in a Google search for “t gowers mathematics” (the 2nd entry is Wiki; the other five at the top are his own blogs, pages, etc.). It will go down, of course: I have no intention of devoting my whole life to an analysis of his mathematics and his personality. And, hopefully, he will eventually cease to attract as much interest as he does now.

In any case, at least one person definitely benefitted from all this – myself. These discussions helped me to clarify my own views and ideas.


Next post: What is combinatorics and what this blog is about according to Igor Pak.

Sunday, April 7, 2013

The Hungarian Combinatorics from an Advanced Standpoint

Previous post: Conceptual mathematics vs. the classical (combinatorial) one.

Again, this post is a long reply to questions posed by ACM. It is a complement to the previous post "Conceptual mathematics vs. the classical (combinatorial) one". The title is intentionally similar to the titles of three well-known books by F. Klein.


First, the terminology in “Conceptual mathematics vs. the classical (combinatorial) one” is mine and was invented on the spot, and the word "classical" is a very bad choice. I should find something better. The word "conceptual" is good enough, but not as catchy as I might like. I meant something real, but as close as possible to Gowers's idea of "two cultures". I do not believe in his theory anymore; but by simply using his terms I will promote it.

Another choice, regularly used in discussions in Gowers's blog, is "combinatorial". It looks like it immediately leads to confusion, as one may see from your question (but not only from it). First of all (I already mentioned this in Gowers's blog or here), there are two rather different types of combinatorics. At one pole there is algebraic combinatorics and most of enumerative combinatorics. R. Stanley and the late G.-C. Rota are among the best (or the best) in this field. One can give an even more extreme example, mentioned by M. Emerton: the symmetric group and its representations. Partitions of natural numbers are at the core of this theory, and in this sense it is combinatorics. On the other hand, it was always considered a part of the theory of representations, a highly conceptual branch of mathematics.

So, there is already a lot of conceptual and quite interesting combinatorics. At the same time, there is Hungarian combinatorics, best represented by the Hungarian school. It is usually associated with P. Erdös, and since last year's Abel prize it is also firmly associated with E. Szemerédi. Currently T. Gowers is its primary spokesperson, with T. Tao serving as a supposedly independent and objective supporter. Of course, all this goes back for centuries.

Today the most obvious difference between these two kinds of combinatorics is the fact that algebraic combinatorics is mostly about exact values and identities, while Hungarian combinatorics is mostly about estimates and asymptotics. If no reasonable estimate is in sight, existence alone is good enough. This is the case with the original version of Szemerédi's theorem. T. Gowers added to it some estimates, which are huge but at least can be written down by elementary means. He also proved that any estimate must be huge (in a precise sense). I think that the short paper proving the latter (probably, it was Gowers's first publication in the field) is the most important result around Szemerédi’s theorem. It is strange that it got almost no publicity, especially compared with his other papers and the Green-Tao ones. It could be that this opinion results from the influence of a classmate, who used to stress that lower estimates are much deeper and more important than upper ones (for positive numbers, of course), especially in combinatorial problems.
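For readers who have not seen it stated, the finitary form of Szemerédi’s theorem to which these estimates refer is (the standard formulation, not quoted from Gowers's papers):
\[
\forall\, k \ge 3,\ \forall\, \delta > 0\ \ \exists\, N_0(k,\delta):\quad N \ge N_0,\ A \subseteq \{1,\dots,N\},\ |A| \ge \delta N \ \Longrightarrow\ A \ \text{contains an arithmetic progression of length } k .
\]
The estimates in question are bounds on how fast $N_0(k,\delta)$ must grow as $\delta$ decreases and $k$ increases.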

Indeed, I do consider Hungarian combinatorics as the opposite of all the new conceptual ideas discovered during the last 100 years. This, obviously, does not mean that the results of Hungarian combinatorics cannot be approached conceptually. We have an example at hand: Furstenberg’s proof of Szemerédi’s theorem. It seems that it was obtained within a year of the publication of Szemerédi’s theorem (I have not checked this right now). Of course, I cannot exclude the possibility that Furstenberg worked on this problem (or on the framework for his proof, without having this particular application as the main goal) for years within his usual conceptual framework, and missed being first by only a few months. I wonder how mathematics would look now if Furstenberg had been the first to solve the problem.

One cannot approach the area (not just the results) of Hungarian combinatorics from any conceptual point of view, since Hungarian combinatorics is not conceptual almost by definition (and definitely by Gowers's description of it in his “Two cultures”). I adhere to the motto “Proofs are more important than theorems, definitions are more important than proofs”. In fact, I was adhering to it long before I learned about this phrase; this was my taste already in middle school (I should confess that I realized this only recently). Of course, I should apply it uniformly. In particular, the Hungarian style of proofs (very convoluted combinations of well-known pieces, as a first approximation) is more essential than the results proved, and the insistence on being elementary but difficult should be taken very seriously – it excludes any deep definitions.

I am not aware of any case in which the “heuristics” of Hungarian combinatorics led anybody to conceptual results. The theorems can (again, Furstenberg), but theorems are not heuristics.

I am not in the business of predicting the future, but I see only two ways for Hungarian combinatorics, assuming that conceptual mathematics is not abandoned. Note that even the ideas of Grothendieck are still not completely explored, and, according to his coauthor J. Dieudonné, there are enough ideas in Grothendieck’s work to occupy mathematicians for centuries to come – so conceptual mathematics has no internal reasons to die in any foreseeable future. Either Hungarian combinatorics will mature by itself and will develop new concepts which will eventually turn it into a part of conceptual mathematics. There are at least germs of such a development. For example, matroids (discovered by H. Whitney, one of the greatest topologists of the 20th century) are only at the next level of abstraction after graphs, but the notion of a matroid is immensely useful (unfortunately, it is hardly taught anywhere, which severely impedes its use). Or it will remain a collection of elementary tricks, and will resemble more and more a collection of mathematical Olympiad problems. Then it will die out and be forgotten.
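Since matroids are hardly taught anywhere, let me at least recall the standard definition (one of several equivalent ones, in the usual notation): a matroid is a pair $(E, \mathcal{I})$, where $E$ is a finite set and $\mathcal{I} \subseteq 2^E$ is a family of “independent” sets such that
\[
\emptyset \in \mathcal{I}; \qquad A \subseteq B \in \mathcal{I} \implies A \in \mathcal{I}; \qquad A, B \in \mathcal{I},\ |A| < |B| \implies \exists\, x \in B \setminus A \ \text{with}\ A \cup \{x\} \in \mathcal{I}.
\]
The linearly independent sets of columns of a matrix and the acyclic sets of edges of a graph are the two motivating examples, which already shows that the notion sits one level of abstraction above graphs.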

I doubt that any area of mathematics that failed to conceptualize in a reasonable time has survived as an active area of research. Note that the meaning of the word “reasonable” changes with time itself, at the very least because of the huge variations in the number of working mathematicians over the course of history. Any suggestions of counterexamples?



Next post: About Timothy Gowers.

Friday, April 5, 2013

The conceptual mathematics vs. the classical (combinatorial) one.

Previous post: Simons's video protection, youtube.com, etc.

This post is an attempt to answer some questions of ACM in a form not requiring knowledge of Grothendieck's ideas or anything similar.

But it is self-contained and touches upon important and hardly widely known issues.

--------------------------------------------


It is not easy to explain how conceptual theorems and proofs, especially ones at a level close to that of Grothendieck's work, can be easier and more difficult at the same time. In fact, they are easy in one sense and difficult in another. Conceptual mathematics depends on – what would one expect here? – new concepts, or, what is the same, new definitions, in order to solve new problems. The hard part is to discover the appropriate definitions. After this the proofs are very natural and straightforward, to the point of being completely trivial in many situations. They are easy. Classically, convoluted proofs with artificial tricks were valued most of all. Classically, it is desirable to have the most elementary proof possible, no matter how complicated it is.

A lot of effort was devoted to attempts to prove the theorem about the distribution of primes by elementary means. In this case the requirement was not to use the theory of complex functions. Finally, such a proof was found, and it turned out to be useless. Neither the first elementary proof nor the subsequent ones clarified anything, and none of them helped to prove a much more precise form of this theorem, known as the Riemann hypothesis (this is still an open problem, which many consider the most important problem in mathematics).
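To be specific: the theorem in question, the prime number theorem, states in the usual notation (with $\pi(x)$ the number of primes not exceeding $x$) that
\[
\pi(x) \sim \frac{x}{\ln x} \qquad (x \to \infty),
\]
and the Riemann hypothesis is equivalent to the much more precise estimate
\[
\pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\ln x\right), \qquad \operatorname{Li}(x) = \int_2^x \frac{dt}{\ln t}.
\]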

Let me try to do this using a simple example, which, perhaps, I have already mentioned (I am sure that I spoke about it quite recently, but it may not have been online). This example is not a “model” or a toy; it is real.

Probably, you know about the so-called Fundamental Theorem of Calculus, usually wrongly attributed to Newton and Leibniz (it was known earlier, and, for example, was presented in the lectures and a textbook of Newton's teacher, Isaac Barrow). It relates derivatives with integrals. Nothing useful can be done without it. Now, one can integrate not only functions of one real variable, but also functions of two variables (having two real numbers as the input), three, and so on. One can also differentiate functions of several variables (basically, by considering them only along straight lines and using the usual derivatives). A function of, say, 5 variables has 5 derivatives, called its partial derivatives.
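In the standard notation, the theorem says that for a sufficiently nice (say, continuously differentiable) function $f$ on an interval $[a, b]$
\[
\int_a^b f'(x)\, dx = f(b) - f(a):
\]
the integral of the derivative over an interval is computed from the values of the function at the two boundary points of the interval.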

Now, the natural question to ask is whether there is an analogue of the Fundamental Theorem of Calculus for functions of several variables. In the 19th century such analogues were needed for applications. Then 3 theorems of this sort were proved, namely, the theorems of Gauss-Ostrogradsky (they discovered it independently of each other, and I am not sure whether there was a third such mathematician or not), Green, and Stokes (some people, as far as I remember, attribute it to J.C. Maxwell, but it is called the Stokes theorem anyhow). The Gauss-Ostrogradsky theorem deals with integration over 3-dimensional domains in space, the Green theorem with 2-dimensional planar domains, and the Stokes theorem with integration over curved surfaces in the usual 3-dimensional space. I hope that I did not mix them up; the reason why this could happen is at the heart of the matter. Of course, I could check this in a moment; but then an important point would be less transparent.
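For the record (and at the risk of mixing them up again), the usual textbook forms are roughly the following, with orientation conventions and smoothness assumptions omitted. Green's theorem, for a planar domain $D$ with boundary curve $\partial D$:
\[
\oint_{\partial D} \left( P\, dx + Q\, dy \right) = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA.
\]
The Gauss-Ostrogradsky (divergence) theorem, for a solid region $V$ with boundary surface $\partial V$:
\[
\iint_{\partial V} \mathbf{F} \cdot \mathbf{n}\; dS = \iiint_V \nabla \cdot \mathbf{F}\; dV.
\]
The classical Stokes theorem, for a surface $S$ in space with boundary curve $\partial S$:
\[
\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r} = \iint_S (\nabla \times \mathbf{F}) \cdot \mathbf{n}\; dS.
\]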

Here are 3 theorems, clearly dealing with similar phenomena, but looking very different and having different and not quite obvious proofs. But there are useful functions of more than 3 variables. What about them? There is a gap in my knowledge of the history of mathematics: I don’t know of any named theorem dealing with more variables, except the final one. Apparently, nobody has written even a moderately detailed history of the intermediate period between the 3 theorems above and the final version.

The final version is called the Stokes theorem again, despite the fact that Stokes had nothing to do with it (except that he proved that special case). It applies to functions of any number of variables and even to functions defined on so-called smooth manifolds, the higher-dimensional generalization of surfaces. On manifolds, variables can be introduced only locally, near any point; and manifolds themselves are not assumed to be contained in some nice ambient space like Euclidean space. So, the final version is much more general. The final version has exactly the same form in all dimensions, and the above-mentioned 3 theorems are its immediate corollaries. This is why it is so easy to forget which names are associated with which particular case.
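In the modern notation the final version is a single line: for a compact oriented $n$-dimensional manifold $M$ with boundary $\partial M$, and a differential form $\omega$ of degree $n-1$ on $M$,
\[
\int_M d\omega = \int_{\partial M} \omega.
\]
The Fundamental Theorem of Calculus is the case $n = 1$, and the three 19th-century theorems are the cases $n = 2$ and $n = 3$, with the operator $d$ specializing to the gradient, the curl, and the divergence.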

And – surprise! – the proof of the general Stokes theorem is trivial. There is a nice short (but very dense) book “Calculus on Manifolds” by M. Spivak devoted to this theorem. I recommend reading its preface to anybody interested in one way or another in mathematics. For mathematicians, knowing its content is a must. In the preface M. Spivak explains what happened. All the proofs are now trivial because all the difficulties were transferred into the definitions. In fact, this Stokes theorem deals with integration not of functions, but of so-called differential forms, sometimes also called exterior forms. And this is a difficult notion. It required very deep insight to discover it, and it is still difficult to learn. In the simplest situation, where nothing depends on any variables, it was discovered by H. Grassmann in the middle of the 19th century. The discoveries of this German school teacher are so important that the American Mathematical Society published an English translation of one of his books a few years ago. It is still quite a mystery how he arrived at his definitions. With the benefit of hindsight, one may say that he was working on geometric problems, but was guided by abstract algebra (which did not exist until about 1930). Later on his ideas were generalized in order to allow everything to depend on some variables (probably, E. Cartan was the main contributor here). By the 1930s the general Stokes theorem was well known to experts. Nowadays, it is possible to teach it to bright undergraduates in any decent US university, but there are not enough such bright undergraduates. It should be in some required course for graduate students, but one can get a Ph.D. without ever being exposed to it.

To sum up, the modern Stokes theorem requires learning a new and not very well motivated (apparently, even Grassmann did not really understand why he introduced his exterior forms) notion of differential forms and their basic properties. Then you have a theorem from which all the 19th-century results follow immediately, and which is infinitely more general than all of them together. At the same time it has the same form for any number of variables and has a trivial proof (and the proofs of the needed theorems about differential forms are also trivial). There are no tricks in the proofs; they are very natural and straightforward. All the difficulties were moved into the definitions.

Now, what is hard and difficult here? New definitions of such importance are infinitely rarer than new theorems. Most mathematicians, even of the highest caliber, did not discover any such definition. Only a minority of Abel prize winners discovered anything comparable, and it is still too early to judge whether their definitions are really important. So, discovering new concepts is hard and rare. Then there is a common prejudice against anything new (I am amazed that it took more than 15 years to convince the public to buy HD TV sets, despite the fact that they are better in the most obvious sense), and there are real difficulties in learning these new notions. For example, there is the notion of a derived category (it comes from the Grothendieck school), which most mathematicians consider difficult and hardly relevant. All proofs in this theory are utterly trivial.

Final note: the new conceptual proofs are often longer than the classical proofs, even of the same results. This is because in classical mathematics various tricks leading to a shortcut through an argument are highly valued, while in conceptual mathematics anything artificial is not valued at all.



Next post: The Hungarian Combinatorics from an Advanced Standpoint.

Wednesday, April 3, 2013

Simons's video protection, youtube.com, etc.

Previous post: What is mathematics?


Technically, this is a reply to a comment by Dmitri Pavlov. But it is only tangentially related to the discussion in Gowers's blog. At the same time, I see in it a good occasion to start a discussion of issues related to the by now infamous copyright law. This notion had some worthwhile components just 10 years ago. Now it looks like complete nonsense obstructing progress. It does not even succeed in making big movie studios and music labels (the main defenders of extreme forms of the copyright law) richer. At the very least, no proof of this was ever offered.

------------------------------------------------------------------------


Dear Dmitri,

Thanks a lot. You certainly know that I am not an expert in software. I am not using UNIX, I am using Windows, and I have no idea what to do with your code. I definitely have the latest version of Adobe Flash, or at least the previous one. I doubt that there is some version released in March which is required to deal with a video posted more than a year ago.

The browsers I use most of the time, Firefox and Opera, have several extensions that allow downloading almost everything by just pressing a button and selecting the quality of the stream. These extensions do not see any video content on Simons’s page.

I got the idea, and it looks like I will be able to download these files even without your list. But your list will save me a lot of time, if I decide to do this (at the moment, I am not inclined to).

But this does not mean that the files are not protected in the legal sense. Files are not protected if there is either a download button, or a statement like the following: "You are free to inspect our code and download our videos if you can find a way to do this". Your suggestion amounts to doing the latter without permission.

A third-party program told me that it is able to see the video (actually, another one, much more interesting to me), but will not download it because this would be illegal. Moreover, the software stated that I have only one legal option to have the video on my computer: to take screenshots of each frame.

“If they were aware of this issue, they would almost certainly add HTML5 video elements and direct download links.”

You see, I contacted a mathematician who is to a big extent responsible for this whole program of interviews, the posting of them, etc. He agreed that the videos should be downloadable, and said that he would contact the appropriate persons. I have no reason not to trust him. So, the people at the Simons Foundation have been aware of this for more than a year and have done nothing.

Even if these links were on the page (I am not able to see them, and I don’t know what you mean by “plain video URLs are embedded in the text”), there are 26 files for Lovasz alone. This is a far cry from being convenient. I would need to use an Adobe video editor (which I happen to have on another computer for a reason completely independent of mathematics – most mathematicians don’t), and glue them into one usable file. It would even be possible to add a menu with direct links to these 26 parts, very much like on a DVD or a Blu-ray disc. But, frankly, why should I do this? Is Simons’s salary (which he determines himself, being the president and the CEO of his company at the same time) not sufficient to make a small charitable contribution and hire a local student to do some primitive video editing? His salary a year or two ago was over 2 billion dollars per year. I did not check the latest available data.

Concerning youtube.com, I would like to say that if something looks to you like an active attempt to protect a video, it is not necessarily so for others. Personally, I don’t care how their links are generated. For me, it is enough to have a button (even three different ones!) in my browser which will find this link without my participation and will download the file, or even several files simultaneously. Moreover, I doubt that youtube.com really wants to protect videos from downloading. They have a lot of 1080p (Full HD) videos, and I don’t know any way to see them in a 1080px-high window at the youtube.com site. There are two choices: a smaller window, or full screen. I haven’t seen a computer monitor with exactly 1080px height. Anyhow, the one I have is 1600px high, and upconverting to this size leads to a noticeable decrease in quality. The only meaningful option for 1080p content is to download it.

By the way, the Simons Foundation site suffers from a similar, but much more severe, problem. The size of the video window appears to be small and fixed. And they may stream 1080p content into it; this was the case with the video I wanted to watch more than a year ago. A lot of bandwidth is wasted. And my ISP can hardly handle streaming 1080p content anyhow.

Please, do not think that I am an admirer of youtube.com policies. Nothing there is permanent, i.e. everything potentially interesting should be downloaded. Their crackdown on alleged (no proof is needed) copyright violators is the online version of last year's raids by US special forces in several countries simultaneously.



Next post: To appear.

What is mathematics?

Previous post: D. Zeilberger's Opinions 1 and 62.


This is a reply to a comment by vznvzn to a post in this blog.


I am not in the business of predicting the future. I have no idea what people will take seriously in 2050. I do not expect that Gowers’s fantasies, or yours, which are nearly the same, will turn into reality. I wouldn’t be surprised if humanity returned by this time to the Middle Ages, or even to a prehistoric level. But I wouldn't bet even a dollar on this. At the same time, mathematics can disappear very quickly. Mathematics is an extremely unusual and fragile human activity, which appeared by accident, then disappeared for almost a thousand years, then slowly returned. The flourishing of mathematics in the 20th century is very exceptional. In fact, we already have many more mathematicians (this is true independently of which meaning of the word “mathematician” one uses) than society, or, if you prefer, humanity, needs.

The meaning of the words “mathematics” and “mathematician” becomes important the moment “computer-assisted proofs” are mentioned. Even Gowers agrees that if his project succeeds, there will be no (pure) mathematicians in the current (or 300, or 2000 years old) sense. The issue could be the topic of a long discussion, of a serious monograph which would be happily published by Princeton University Press, but I am not sure that you are interested in going into it deeply. Let me only point out that mathematics has any value only as a human activity. It is partially a science, but to a big extent it is an art. All proofs belong to the art part. They are not needed at all for applications of mathematics. If a proof cannot be understood by humans (like the purported proofs in your examples), it has no value. Or, rather, its value is negative: a lot of human time and computer resources were wasted.

Now, a few words about your examples. The Kepler conjecture is not an interesting mathematical problem. It is not related to anything else, and its solution is isolated as well. Physicists had some limited interest in it, but for them it was obvious for a long time (probably, from the very beginning) that the conjectured result is true.

The 4-color problem is not interesting either. Think for a moment: who cares whether every map can be colored with only 4 colors? In the real world we have many more colors at our disposal, and in mathematics we have a beautiful, elementary, but conceptual proof of a theorem to the effect that 5 colors are sufficient. This proof deserves to be studied by every student of mathematics, but nobody rushed to study the Appel-Haken “proof” of the 4-color “theorem”. When a graduate student was assigned the task of studying it (and, if I remember correctly, of reproducing the computer part for the first time), he very soon found several gaps. Then Haken published an amusing article, which can be summarized as follows. The “proof” has such a nature that it may have only a few gaps, and finding even one is extremely difficult; therefore, if somebody did find a gap, it does not matter. This is so ridiculous that I am sure my summary is not complete. Today it is much easier than it was then to reproduce the computer part, and the human part has also been simplified (it consists of verifying by hand some properties of a bunch of graphs: more than 1,000 or even 1,500 in the Appel-Haken “proof”, fewer than 600 now).

Wiles deserved a Fields medal not because he proved FLT (Fermat's Last Theorem); he deserved it already in 1990, before he completed his proof. In any case, the main and really important result of his work is not the proof of FLT (this is for popular books), but the proof of the so-called modularity conjecture for nearly all cases (his students later completed the proof of the modularity conjecture for the exceptional cases). Due to some previous work by other mathematicians, all very abstract and conceptual, this immediately implies FLT. Before this work (mid-1980s), there was no reason even to expect that FLT is true. Wiles himself learned about FLT during his school years (every future mathematician does) and dreamed about proving it (only a few have such dreams). But he did not move a finger before it was reduced to the modularity conjecture. Gauss, who was considered the King of Mathematics already during his lifetime, was primarily a number theorist. When asked, he dismissed FLT as a completely uninteresting problem: “every idiot can invent zillions of such problems, simply stated, but requiring hundreds of years of work by wise men to be solved”. Some banker has already modified FLT into a more general statement not following from Wiles's work and even announced a monetary prize for a proof of his conjecture. I am not sure, but, probably, he wanted a solution within a specified time frame; perhaps, there is no money for this anymore.
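For the record, FLT is the statement that the equation
\[
x^n + y^n = z^n, \qquad n \ge 3,
\]
has no solutions in positive integers $x, y, z$, while the modularity conjecture is the statement that every elliptic curve over $\mathbb{Q}$ is modular, i.e. arises from a modular form. It is the latter, highly conceptual statement that Wiles actually proved (in the semistable case), and the abstract previous work alluded to above is the reduction of FLT to it.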


Let me tell you about another, by now mostly forgotten, example. It is relevant here because, like the 3x+1 problem (the Collatz conjecture), it deals with iterations of a simple rule, and also for another reason, which I will mention later. In other words, both at least formally belong to the field of dynamical systems, being questions about iterations.
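To recall, the 3x+1 rule is the iteration of the map
\[
T(n) =
\begin{cases}
n/2, & n \ \text{even},\\
3n+1, & n \ \text{odd},
\end{cases}
\]
and the conjecture is that, starting from any positive integer, repeated application of $T$ eventually reaches 1.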

My example is the Feigenbaum conjecture about iterations of maps of an interval to itself. Mitchell Feigenbaum is a theoretical physicist who was led to his conjecture by physical intuition and extensive computer experiments. (By the way, I have no objections when computers are used to collect evidence.) The moment it was published, it was considered to be a very deep insight, even as a conjecture, and a very important and challenging one. The Feigenbaum conjecture was proved, with substantial help from computers, only a few years later by the (outstanding independently of this) mathematical physicist O. Lanford. For his computer part to be applicable, he imposed additional restrictions on the maps considered. Still, generality is dear to mathematicians, but not to physicists, and the problem was considered to be solved. In a sense, it was solved indeed. Then another mathematician, D. Sullivan, who had recently moved from topology to dynamical systems, took up the challenge and proved the Feigenbaum conjecture without any assistance from computers. This is quite remarkable by itself: mathematicians usually abandon a problem, or even its whole area, after a computer-assisted argument. Even more remarkable is the fact that his proof is not only human-readable, but provides a better result. He lifted Lanford's artificial restrictions.
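In its simplest form the story concerns the family of maps of the interval
\[
f_r(x) = r\,x(1-x), \qquad x \in [0,1],\ 0 \le r \le 4,
\]
whose attracting periodic orbits double their period at parameter values $r_1 < r_2 < r_3 < \dots$. Feigenbaum observed numerically that
\[
\frac{r_n - r_{n-1}}{r_{n+1} - r_n} \;\longrightarrow\; \delta \approx 4.6692\dots,
\]
and that the same constant $\delta$ appears for a whole class of such families. Roughly speaking, the conjecture explains this universality by the existence of a suitable fixed point of a renormalization operator acting on such maps.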

The next point (the one I promised above) concerns the standards Lanford applied to computer-assisted proofs. He said and wrote that a computer-assisted proof is a proof in any sense only if the author not only understands its human-readable part, but also understands every line of the computer code (and can explain why this code does what it is claimed to do). Moreover, the author should understand all the details of the operating system used, down to the algorithm used to divide time between different programs. For Lanford, a computer-assisted proof should be understandable to humans in every detail, except that it may take too much time to reproduce the computations themselves.

Obviously, Lanford understood quite well that mathematics is a human activity.

Compare this with what is going on now. People freely use the Windows OS (it seems that even at Microsoft nobody understands how it works and what it does), and proprietary software like Mathematica™, for which the code is a trade secret and reverse engineering is illegal. From my point of view, this fact alone puts everything done using such software outside not only of mathematics, but of any science.


Next post: To appear.

Monday, April 1, 2013

D. Zeilberger's Opinions 1 and 62

Previous post: Combinatorics is not a new way of looking at mathematics.

While this is a reply to a comment by Shubhendu Trivedi in Gowers's blog, I hope that the following is interesting independently of the discussion there.


Opinion 1. Zeilberger admits there that he has no idea about the methods used even in his examples (the 4th paragraph).

He is correct that the Jones polynomial is to a big extent a combinatorial gadget. Probably, he is not aware that this gadget applies to topology only if you have a purely topological theorem at your disposal (proved by Reidemeister in the 1930s, it remains a non-trivial theorem). He may also not be aware of the fact that the Jones polynomial did not lead to the solution of any problem of interest to topologists at the time. The proof of the so-called Tait conjecture was highly publicized, and many people believe that this was an important conjecture. Fortunately, there is a document proving that this is not the case. Namely, around 1980 R. Kirby, with the help of many other topologists, compiled a list of problems in topology. About 15 years later he published an updated and expanded version. Both editions consist of several parts, one of which is devoted to problems in knot theory. The Tait conjecture is about knots, and it is not in the 1980 list (by the time Kirby started to prepare the new expanded list, it had already been proved). Nobody was interested in it, and its solution has no applications.
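For concreteness: the Jones polynomial $V(L)$ of an oriented link $L$ can be characterized by $V(\text{unknot}) = 1$ together with the skein relation
\[
t^{-1}\, V(L_+) - t\, V(L_-) = \left( t^{1/2} - t^{-1/2} \right) V(L_0),
\]
where $L_+$, $L_-$, $L_0$ are links with diagrams differing at a single crossing. The computation is done on a diagram, and it is exactly Reidemeister's theorem that guarantees that the result is an invariant of the link itself and not of the chosen diagram.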

Eventually, the theory of the Jones polynomial and its generalizations turned into an independent, self-contained field, desperately searching for connections with other branches of mathematics, or at least with topology itself.

But D. Zeilberger should be aware that the Tutte polynomial belongs to conceptual mathematics. It is one of the precursors of one of the main ideas of Grothendieck, namely, of K-theory. There is no reason to think that Grothendieck was aware of Tutte's work, but the Tutte polynomial is still essentially a K-theoretic construction.
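For the record, the Tutte polynomial $T_G(x,y)$ of a graph $G$ is determined by the deletion-contraction recurrence
\[
T_G = T_{G-e} + T_{G/e} \quad (e \ \text{neither a loop nor a bridge}), \qquad
T_G = x\, T_{G/e} \quad (e \ \text{a bridge}), \qquad
T_G = y\, T_{G-e} \quad (e \ \text{a loop}),
\]
with $T_G = 1$ for an edgeless graph. It is the universal invariant with respect to deletion and contraction (such invariants are even called Tutte-Grothendieck invariants), and this universal-object flavor is precisely the K-theoretic one mentioned above.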

The Seiberg-Witten ideas have nothing to do with combinatorics. The Seiberg-Witten invariants are based on topology and some advanced parts of the theory of nonlinear PDE. In the last decade some attempts to get rid of the PDE in this theory were partially successful. They involve some rather combinatorics-like pictures. I wonder if Zeilberger has written anything about this. But the situation is essentially the same as with the Tutte polynomial. These quite remarkable attempts are inspired, not always directly, by such abstract ideas as 2-categories, for example. Note that category theory is the most abstract part of mathematics, except, maybe, modern set theory (a field in which only very few mathematicians are working).


Opinion 62. First, the factual mistakes.

Grothendieck did not dislike other sciences. In particular, at the age of approximately 42-46 he developed a serious interest in biology. Ironically, in the same paragraph Zeilberger commends I.M. Gelfand for his interest in biology.

Major applications of algebraic geometry were not initiated by the “Russian” school, but Soviet mathematicians indeed embraced this field very enthusiastically. And the initial applications did not involve any Grothendieck-style algebraic geometry.

More important is the fact that Zeilberger’s opinions are self-contradictory. He dislikes abstract (in fact, conceptual) mathematics, and at the same time praises the “Russian” school for applications of exactly the same abstract conceptual methods.

Zeilberger writes: “Grothendieck was a loner, and hardly collaborated”. Does he really know even a little about Grothendieck and his work? Grothendieck’s rebuilding of algebraic geometry in an abstract conceptual framework was a highly collaborative enterprise. He has almost no papers in algebraic geometry published by him alone. The foundational text EGA, Elements of Algebraic Geometry, has Grothendieck and Dieudonné as authors (in this order, violating the tradition of listing the authors of mathematical papers in alphabetical order) and was written by Dieudonné alone. More advanced things were published as SGA, Seminar on Algebraic Geometry, and most of the volumes of this series of Springer Lecture Notes in Mathematics are authored by Grothendieck and various collaborators. Some present his ideas but do not have him as an author; one of them is written and authored by P. Deligne alone.

Zeilberger has no idea what kind of youth Grothendieck had, and presents some (insulting, I would say) conjectures about it. Grothendieck was always concerned with injustice done to other people, in particular within mathematics. His elevated sense of (in)justice eventually led him to (fairly misguided, I believe, but sincere and well-intentioned) political activity. He was initially encouraged by colleagues, who abandoned him when this enterprise started to require more than lip service.

The phrase “...was already kicked out of high-school (for political reasons), so could focus all his rebellious energy on innovative math” is obviously absurd to everyone even superficially familiar with the history of the USSR. If someone was persecuted on political grounds, then (he could be summarily executed, but at the very least) any mathematical or other scientific activity would have been impossible for him for life. There would have been no way to be a professor at Moscow State University, or to take part in the Soviet atomic project.

Surely, Gelfand said something like what Zeilberger writes about the future of combinatorics. I was never at the Gelfand seminar, neither in Moscow nor at Rutgers. But there are his publications, from which one can get an idea of what kind of combinatorics Gelfand was interested in. Had Zeilberger attempted to read any of these papers, he would hardly have seen there even a trace of what is so dear to him. All of Gelfand's works are highly conceptual.

Finally, it is worth mentioning that Gelfand always wanted to be the one who sets the fashion, not the one who follows it. Of course, I see nothing wrong with that. In the late 1960s he regretted that he had missed the emergence of a new field: algebraic and differential topology. He attempted to rectify this by two series of papers (with coauthors; by this time he did not publish anything under his name alone), one about the cohomology of infinite-dimensional Lie algebras, another about a (conjectural) combinatorial definition of Pontrjagin classes (a basic notion in topology). It is very instructive to see what a “combinatorial definition” meant for I.M. Gelfand.


Next post: What is mathematics?