About the title

I changed the title of the blog on March 20, 2013 (it used to have the title “Notes of an owl”). This was my immediate reaction to the news that T. Gowers was presenting to the public the work of P. Deligne on the occasion of the 2013 Abel prize being awarded to Deligne (by his own admission, T. Gowers is not qualified to do this).

The issue at hand is not just the lack of qualification; the real issue is that the award to P. Deligne is, unfortunately, the best compensation to the mathematical community for the 2012 award of the Abel prize to Szemerédi. I predicted Deligne before the announcement on these grounds alone. I would have preferred the prize to P. Deligne to be awarded out of pure appreciation of his work.



I believe that mathematicians urgently need to stop the growth of Gowers's influence, and, first of all, his initiatives in mathematical publishing. I wrote extensively about the first one; now there is another: to take over the arXiv overlay electronic journals. The same arguments apply.



Now it looks like this title is very good, contrary to my initial opinion. And there is no way back.

Sunday, April 7, 2013

The Hungarian Combinatorics from an Advanced Standpoint

Previous post: Conceptual mathematics vs. the classical (combinatorial) one.

Again, this post is a long reply to questions posed by ACM. It is a complement to the previous post, "Conceptual mathematics vs. the classical (combinatorial) one". The title is intentionally similar to the titles of three well-known books by F. Klein.


First, the terminology in “Conceptual mathematics vs. the classical (combinatorial) one” is mine and was invented on the spot, and the word "classical" is a very bad choice. I should find something better. The word "conceptual" is good enough, but not as catchy as I may like. I meant something real, but as close as possible to Gowers's idea of "two cultures". I do not believe in his theory anymore, but by simply using his terms I would be promoting it.

Another choice, regularly used in discussions in Gowers's blog, is "combinatorial". It looks like it immediately leads to confusion, as one may see from your question (but not only from it). First of all (I already mentioned this in Gowers's blog or here), there are two rather different types of combinatorics. At one pole there is algebraic combinatorics and most of enumerative combinatorics. R. Stanley and the late G.-C. Rota are among the best (or the best) in this field. One can give an even more extreme example, mentioned by M. Emerton: the symmetric group and its representations. Partitions of natural numbers are at the core of this theory, and in this sense it is combinatorics. On the other hand, it was always considered a part of the theory of representations, a highly conceptual branch of mathematics.

So, there is already a lot of conceptual and quite interesting combinatorics. At the same time, there is Hungarian combinatorics, best represented by the Hungarian school. It is usually associated with P. Erdős, and since last year's Abel prize it is also firmly associated with E. Szemerédi. Currently T. Gowers is its primary spokesperson, with T. Tao serving as a supposedly independent and objective supporter. Of course, all this goes back for centuries.

Today the most obvious difference between these two kinds of combinatorics is the fact that algebraic combinatorics is mostly about exact values and identities, while Hungarian combinatorics is mostly about estimates and asymptotics. If no reasonable estimate is in sight, existence is good enough. This is the case with the original version of Szemerédi's theorem. T. Gowers added to it some estimates, which are huge but at least can be written down by elementary means. He also proved that any estimate must be huge (in a precise sense). I think that the short paper proving the latter (probably, it was Gowers's first publication in the field) is the most important result around Szemerédi's theorem. It is strange that it got almost no publicity, especially compared with his other papers and the Green–Tao ones. It could be that this opinion results from the influence of a classmate, who used to stress that lower estimates are much deeper and more important than upper ones (for positive quantities, of course), especially in combinatorial problems.

Indeed, I do consider Hungarian combinatorics as the opposite of all the new conceptual ideas discovered during the last 100 years. This, obviously, does not mean that the results of Hungarian combinatorics cannot be approached conceptually. We have an example at hand: Furstenberg's proof of Szemerédi's theorem. It seems that it was obtained within a year of the publication of Szemerédi's theorem (I have not checked this right now). Of course, I cannot exclude the possibility that Furstenberg had worked on this problem (or on the framework for his proof, without having this particular application as the main goal) for years within his usual conceptual framework, and missed being first by only a few months. I wonder how mathematics would look now if Furstenberg had been the first to solve the problem.

One cannot approach the area (not the results alone) of Hungarian combinatorics from any conceptual point of view, since Hungarian combinatorics is not conceptual almost by definition (and definitely by Gowers's description of it in his “Two cultures”). I adhere to the motto “Proofs are more important than theorems, definitions are more important than proofs”. In fact, I was adhering to it long before I learned this phrase; this was my taste already in middle school (I should confess that I realized this only recently). Of course, I should apply it uniformly. In particular, the Hungarian style of proofs (very convoluted combinations of well-known pieces, as a first approximation) is more essential than the results proved, and the insistence on being elementary but difficult should be taken very seriously – it excludes any deep definitions.

I am not aware of any case when the “heuristics” of Hungarian combinatorics led anybody to conceptual results. Its theorems can (again, Furstenberg), but theorems are not heuristics.

I am not in the business of predicting the future, but I see only two ways for Hungarian combinatorics, assuming that conceptual mathematics is not abandoned. Note that even the ideas of Grothendieck are still not completely explored, and, according to his coauthor J. Dieudonné, there are enough ideas in Grothendieck's work to occupy mathematicians for centuries to come – conceptual mathematics has no internal reasons to die in any foreseeable future. Either Hungarian combinatorics will mature by itself and will develop new concepts which eventually will turn it into a part of conceptual mathematics. There are at least germs of such a development. For example, matroids (discovered by H. Whitney, one of the greatest topologists of the 20th century) are only at the next level of abstraction after graphs, but the notion of a matroid is immensely useful (unfortunately, it is hardly taught anywhere, which severely impedes its use). Or it will remain a collection of elementary tricks, and will resemble more and more a collection of mathematical olympiad problems. Then it will die out and be forgotten.
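Since the notion of a matroid is so rarely taught, here is a minimal illustration (my own throwaway sketch, not anything from the literature): a brute-force check of the two standard independence axioms on a toy example, the uniform matroid U(2, 4).

```python
from itertools import combinations

def is_matroid(ground, independent):
    """Brute-force check of the matroid independence axioms for a
    family of subsets (frozensets) of a finite ground set."""
    ind = set(independent)
    if frozenset() not in ind:
        return False
    # Hereditary axiom: every subset of an independent set is independent.
    for s in ind:
        for k in range(len(s)):
            if any(frozenset(t) not in ind for t in combinations(s, k)):
                return False
    # Exchange axiom: if |A| < |B|, some x in B \ A keeps A + {x} independent.
    for a in ind:
        for b in ind:
            if len(a) < len(b) and not any(a | {x} in ind for x in b - a):
                return False
    return True

# The uniform matroid U(2, 4): subsets of size at most 2 are independent.
ground = {1, 2, 3, 4}
uniform = [frozenset(s) for k in range(3) for s in combinations(sorted(ground), k)]
print(is_matroid(ground, uniform))  # True
```

The point of the abstraction is that the same two axioms also capture linear independence of vectors and acyclicity of edge sets in a graph.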

I doubt that any area of mathematics which failed to conceptualize in a reasonable time has survived as an active area of research. Note that the meaning of the word “reasonable” itself changes with time; at the very least because of the huge variations in the number of working mathematicians throughout history. Any suggestions of counterexamples?



Next post: About Timothy Gowers.

Friday, April 5, 2013

The conceptual mathematics vs. the classical (combinatorial) one.

Previous post: Simons's video protection, youtube.com, etc.

This post is an attempt to answer some questions of ACM in a form not requiring knowledge of Grothendieck's ideas or anything similar.

But it is self-contained and touches upon important and hardly widely known issues.

--------------------------------------------


It is not easy to explain how conceptual theorems and proofs, especially ones at a level close to that of Grothendieck's work, could be easier and more difficult at the same time. In fact, they are easy in one sense and difficult in another. Conceptual mathematics depends on – what would one expect here? – new concepts, or, what is the same, new definitions, in order to solve new problems. The hard part is to discover the appropriate definitions. After this, proofs are very natural and straightforward, up to being completely trivial in many situations. They are easy. Classically, convoluted proofs with artificial tricks were valued most of all. Classically, it is desirable to have the most elementary proof possible, no matter how complicated it is.

A lot of effort was devoted to attempts to prove the theorem about the distribution of primes by elementary means. In this case the requirement was not to use the theory of complex functions. Finally, such a proof was found, and it turned out to be useless. Neither the first elementary proof, nor subsequent ones, clarified anything, and none helped to prove a much more precise form of this theorem, known as the Riemann hypothesis (this is still an open problem, which many consider the most important problem in mathematics).

Let me try to do this using a simple example, which, perhaps, I have already mentioned (I am sure that I spoke about it quite recently, but it may not have been online). This example is not a “model” or a toy; it is real.

Probably, you know about the so-called Fundamental Theorem of Calculus, usually wrongly attributed to Newton and Leibniz (it was known earlier, and, for example, was presented in the lectures and a textbook of Newton's teacher, Isaac Barrow). It relates derivatives to integrals. Nothing useful can be done without it. Now, one can integrate not only functions of real numbers, but also functions of two variables (having two real numbers as the input), three, and so on. One can also differentiate functions of several variables (basically, by considering them only along straight lines and using the usual derivatives). A function of, say, 5 variables has 5 derivatives, called its partial derivatives.
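For reference, the theorem itself fits in one line (in modern notation, of course, not Barrow's):

```latex
% Fundamental Theorem of Calculus, for a suitably smooth f on [a, b]:
\int_a^b f'(x)\,dx \;=\; f(b) - f(a)
```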

Now, the natural question to ask is whether there is an analogue of the Fundamental Theorem of Calculus for functions of several variables. In the 19th century such analogues were needed for applications. Three theorems of this sort were proved, namely, the theorems of Gauss–Ostrogradsky (they discovered it independently of each other, and I am not sure if there was a third such mathematician or not), Green, and Stokes (some people, as far as I remember, attribute it to J.C. Maxwell, but it is called the Stokes theorem anyhow). The Gauss–Ostrogradsky theorem deals with integration over 3-dimensional domains in space, the Green theorem with 2-dimensional planar domains, and the Stokes theorem with integration over curved surfaces in the usual 3-dimensional space. I hope that I did not mix them up; the reason why this could happen is at the heart of the matter. Of course, I could check this in a moment; but then an important point would be less transparent.
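To make the family resemblance visible, here are the three statements in their standard modern vector-calculus form (today's notation, not that of the 19th century):

```latex
% Gauss--Ostrogradsky (divergence) theorem, V a solid region in 3-space:
\iiint_V \nabla \cdot \mathbf{F}\, dV \;=\; \iint_{\partial V} \mathbf{F} \cdot d\mathbf{S}

% Green's theorem, D a planar domain:
\iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx\, dy
  \;=\; \oint_{\partial D} P\, dx + Q\, dy

% Classical Stokes theorem, S a curved surface in 3-space:
\iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} \;=\; \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
```

In each case an integral over a region equals an integral of a suitable derivative-like expression over the boundary of that region; but the three formulas look quite different.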

Here are 3 theorems, clearly dealing with similar phenomena, but looking very different and having different and not quite obvious proofs. But there are useful functions of more than 3 variables. What about them? There is a gap in my knowledge of the history of mathematics: I don't know of any named theorem dealing with more variables, except the final one. Apparently, nobody has written even a moderately detailed history of the intermediate period between the 3 theorems above and the final version.

The final version is called the Stokes theorem again, although Stokes had nothing to do with it (except that he proved the special case above). It applies to functions of any number of variables and even to functions defined on so-called smooth manifolds, the higher-dimensional generalization of surfaces. On manifolds, variables can be introduced only locally, near any point; and manifolds themselves are not assumed to be contained in some nice ambient space like a Euclidean space. So, the final version is much more general. And the final version has exactly the same form in all dimensions, yet the above-mentioned 3 theorems are its immediate corollaries. This is why it is so easy to forget which names are associated with which particular case.
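Indeed, the final version is a single formula, the same in every dimension:

```latex
% General Stokes theorem: M a compact oriented n-dimensional manifold
% with boundary, \omega a differential (n-1)-form on M:
\int_M d\omega \;=\; \int_{\partial M} \omega
```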

And – surprise! – the proof of the general Stokes theorem is trivial. There is a nice short (but very dense) book, “Calculus on Manifolds” by M. Spivak, devoted to this theorem. I recommend reading its preface to anybody interested in one way or another in mathematics. For mathematicians, knowing its content is a must. In the preface M. Spivak explains what happened. All the proofs are now trivial because all the difficulties were transferred into definitions. In fact, this Stokes theorem deals with integration not of functions, but of so-called differential forms, sometimes also called exterior forms. And this is a difficult notion. It required very deep insights to discover it, and it is still difficult to learn. In the simplest situation, where nothing depends on any variables, it was discovered by H. Grassmann in the middle of the 19th century. The discoveries of this German school teacher are so important that the American Mathematical Society published an English translation of one of his books a few years ago. It is still quite a mystery how he arrived at his definitions. With the benefit of hindsight, one may say that he was working on geometric problems, but was guided by abstract algebra (which did not exist until the 1930s). Later on his ideas were generalized in order to allow everything to depend on some variables (probably, E. Cartan was the main contributor here). By the 1930s the general Stokes theorem was well known to experts. Nowadays, it is possible to teach it to bright undergraduates in any decent US university, but there are not enough such bright undergraduates. It should be in some required course for graduate students, but one can get a Ph.D. without ever being exposed to it.
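To see how the difficulties sit in the definitions, take the simplest case: once the exterior derivative d is defined, Green's theorem is just the general formula applied to a 1-form in the plane.

```latex
% For the 1-form \omega = P\,dx + Q\,dy on a planar domain D:
d\omega = \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx \wedge dy
% so \int_D d\omega = \int_{\partial D} \omega is exactly Green's theorem.
```

All the real work is hidden in the definitions of the wedge product and of d; once they are in place, the computation above is a one-liner.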

To sum up, the modern Stokes theorem requires learning a new and not very well motivated (apparently, even Grassmann himself did not really understand why he introduced his exterior forms) notion of differential forms and their basic properties. Then you have a theorem from which all the 19th century results follow immediately, and which is infinitely more general than all of them together. At the same time it has the same form for any number of variables and has a trivial proof (and the proofs of the needed theorems about differential forms are also trivial). There are no tricks in the proofs; they are very natural and straightforward. All difficulties were moved into the definitions.

Now, what exactly is hard and difficult here? New definitions of such importance are infinitely rarer than new theorems. Most mathematicians, even of the highest caliber, did not discover any such definition. Only a minority of Abel prize winners discovered anything comparable, and it is still too early to judge whether their definitions are really important. So, discovering new concepts is hard and rare. Then there is a common prejudice against anything new (I am amazed that it took more than 15 years to convince the public to buy HD TV sets, even though they are better in the most obvious sense), and there are real difficulties in learning these new notions. For example, there is the notion of a derived category (it comes from the Grothendieck school), which most mathematicians consider difficult and hardly relevant. All proofs in this theory are utterly trivial.

Final note: the new conceptual proofs are often longer than the classical proofs, even of the same results. This is because in classical mathematics various tricks leading to a shortcut through an argument are highly valued, while in conceptual mathematics anything artificial is not valued at all.



Next post: The Hungarian Combinatorics from an Advanced Standpoint.

Wednesday, April 3, 2013

Simons's video protection, youtube.com, etc.

Previous post: What is mathematics?


Technically, this is a reply to a comment by Dmitri Pavlov. But it is only tangentially related to the discussion in Gowers's blog. At the same time, I see in it a good occasion to start a discussion of issues related to the by now infamous copyright law. This notion had some worthwhile components just 10 years ago. Now it looks like complete nonsense obstructing progress. It does not even succeed in making the big movie studios and music labels (the main defenders of extreme forms of the copyright law) richer. At the very least, no proof of this was ever offered.

------------------------------------------------------------------------


Dear Dmitri,

Thanks a lot. You certainly know that I am not an expert in software. I am not using UNIX, I am using Windows, and I have no idea what to do with your code. I definitely have the latest version of Adobe Flash, or at least the previous one. I doubt that there is some version released in March which is required to deal with a video posted more than a year ago.

The browsers I use most of the time, Firefox and Opera, have several extensions that allow downloading almost anything by just pressing a button and selecting the quality of the stream. These extensions don't see any video content on Simons's page.

I got the idea, and it looks like I will be able to download these files even without your list. But your list will save me a lot of time, if I decide to do this (at the moment, I am not so inclined).

But this does not mean that files are not protected in the legal sense. Files are not protected if there is either a download button, or a statement like the following: "You are free to inspect our code and download our videos if you will find a way to do this". Your suggestion amounts to doing the latter without permission.

A third party software told me that it is able to see the video (actually, another one, much more interesting for me), but will not download it because this would be illegal. Moreover, the software stated that I have only one legal option to have the video in my computer: to take screenshots of each frame.

“If they were aware of this issue, they would almost certainly add HTML5 video elements and direct download links.”

You see, I contacted a mathematician who is to a big extent responsible for this whole program of interviews, their posting, etc. He agreed that the videos should be downloadable, and said that he would contact the appropriate persons. I have no reason not to trust him. So, the people at the Simons foundation have been aware of this for more than a year and did nothing.

Even if these links were on the page (I am not able to see them, and I don't know what you mean by “plain video URLs are embedded in the text”), there are 26 files for Lovasz alone. This is a far cry from being convenient. I would need to use an Adobe video editor (which I accidentally do have on another computer, for a reason completely independent of mathematics – most mathematicians don't) and glue them into one usable file. It would even be possible to add a menu with direct links to these 26 parts, very much like on a DVD or a Blu-ray disc. But, frankly, why should I do this? Is Simons's salary (which he determines himself, being the president and the CEO of his company at the same time) not sufficient to make a small charitable contribution and hire a local student to do some primitive video editing? His salary a year or two ago was over 2 billion dollars per year. I did not check the latest available data.

Concerning youtube.com, I would like to say that if something looks to you like an active attempt to protect a video, it is not necessarily so for others. Personally, I don't care how their links are generated. For me, it is enough to have a button (even three different ones!) in my browser which will find this link without my participation and will download the file, or even several files simultaneously. Moreover, I doubt that youtube.com really wants to protect videos from downloading. They have a lot of 1080p (Full HD) videos, and I don't know any way to see them in a 1080px-high window at the youtube site. There are two choices: a smaller window, or full screen. I haven't seen a computer monitor with exactly 1080px height. Anyhow, the one I have is 1600px high, and upconverting to this size leads to a noticeable decrease in quality. The only meaningful option for 1080p content is to download it.

By the way, the Simons foundation site suffers from a similar, but much more severe, problem. The size of the video window appears to be small and fixed. And they may stream 1080p content into it; this was the case with the video I wanted to watch more than a year ago. A lot of bandwidth is wasted. And my ISP can hardly handle streaming 1080p content anyhow.

Please, do not think that I am an admirer of youtube.com policies. Nothing there is permanent, i.e. everything potentially interesting should be downloaded. Their crackdown on alleged (no proof is needed) copyright violators is the online version of last year's raids of the US special forces in several countries simultaneously.



Next post: To appear.

What is mathematics?

Previous post: D. Zeilberger's Opinions 1 and 62.


This is a reply to a comment by vznvzn to a post in this blog.


I am not in the business of predicting the future. I have no idea what people will take seriously in 2050. I do not expect that Gowers's fantasies, or yours, which are nearly the same, will turn into reality. I wouldn't be surprised if humanity returned by this time to the Middle Ages, or even to the pre-historical level. But I wouldn't bet even a dollar on this. At the same time, mathematics can disappear very quickly. Mathematics is an extremely unusual and fragile human activity, which appeared by accident, then disappeared for almost a thousand years, then slowly returned. The flourishing of mathematics in the 20th century is very exceptional. In fact, we already have many more mathematicians (this is true independently of which meaning of the word “mathematician” one uses) than society, or, if you prefer, humanity needs.

The meaning of the words “mathematics” and “mathematician” becomes important the moment “computer-assisted proofs” are mentioned. Even Gowers agrees that if his project succeeds, there will be no (pure) mathematicians in the current (or 300, or 2000 years old) sense. The issue could be the topic of a long discussion, of a serious monograph which would be happily published by Princeton University Press, but I am not sure that you are interested in going into it deeply. Let me only point out that mathematics has any value only as a human activity. It is partially a science, but to a big extent it is an art. All proofs belong to the art part. They are not needed at all for applications of mathematics. If a proof cannot be understood by humans (like the purported proofs in your examples), it has no value. Or, rather, its value is negative: a lot of human time and computer resources were wasted.

Now, a few words about your examples. The Kepler conjecture is not an interesting mathematical problem. It is not related to anything else, and its solution is isolated as well. Physicists had some limited interest in it, but for them it was obvious for a long time (probably, from the very beginning) that the conjectured result is true.

The 4-color problem is not interesting either. Think for a moment: who cares whether every map can be colored with only 4 colors? In the real world we have many more colors at our disposal, and in mathematics we have a beautiful, elementary, but conceptual proof of a theorem to the effect that 5 colors are sufficient. This proof deserves to be studied by every student of mathematics, but nobody rushed to study the Appel–Haken “proof” of the 4-color “theorem”. When a graduate student was assigned the task of studying it (and, if I remember correctly, of reproducing the computer part for the first time), he very soon found several gaps. Then Haken published an amusing article, which can be summarized as follows: the “proof” has such a nature that it may have only a few gaps, and finding even one is extremely difficult; therefore, if somebody did find a gap, it does not matter. This is so ridiculous that I am sure that my summary is not complete. Today it is much easier than at that time to reproduce the computer part, and the human part was also simplified (it consists in verifying by hand some properties of a bunch of graphs: more than 1,000 or even 1,500 in the Appel–Haken “proof”, fewer than 600 now).

Wiles deserved a Fields medal not because he proved FLT (Fermat's Last Theorem); he deserved it already in 1990, before he completed his proof. In any case, the main and really important result of his work is not the proof of FLT (that is for popular books), but the proof of the so-called modularity conjecture for nearly all cases (his students later completed the proof of the modularity conjecture for the exceptional cases). Due to some previous work by other mathematicians, all very abstract and conceptual, this immediately implies FLT. Before this work (mid-1980s), there was no reason even to expect that FLT is true. Wiles himself learned about FLT during his school years (every future mathematician does) and dreamed about proving it (only a few have such dreams). But he did not move a finger before it was reduced to the modularity conjecture. Gauss, who was considered the King of Mathematics already during his lifetime, was primarily a number theorist. When asked, he dismissed FLT as a completely uninteresting problem: “every idiot can invent zillions of such problems, simply stated, but requiring hundreds of years of work of wise men to be solved”. Some banker has already modified FLT into a more general statement not following from the Wiles work and even announced a monetary prize for the proof of his conjecture. I am not sure, but, probably, he wanted a solution within a specified time frame; perhaps, there is no money for this anymore.


Let me tell you about another, mostly forgotten by now, example. It is relevant here because, like the 3x+1 problem (the Collatz conjecture), it deals with iterations of a simple rule, and for another reason, which I will mention later. In other words, both at least formally belong to the field of dynamical systems, being questions about iterations.
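For readers who have not met it, the 3x+1 rule can be stated in a few lines of code (a throwaway sketch, nothing more); the conjecture is that the loop below terminates for every starting value.

```python
def collatz_steps(n):
    """Number of iterations of the 3x+1 rule needed to reach 1:
    halve n if it is even, replace it by 3n + 1 if it is odd."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1, so 8
print(collatz_steps(27))  # 111: small starting values can wander for a long time
```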

My example is the Feigenbaum conjecture about iterations of maps of an interval to itself. Mitchell Feigenbaum is a theoretical physicist who was led to his conjecture by physical intuition and extensive computer experiments. (By the way, I have no objections when computers are used to collect evidence.) The moment it was published, it was considered a very deep insight even as a conjecture, and a very important and challenging conjecture at that. The Feigenbaum conjecture was proved, with some substantial help from computers, only a few years later by O. Lanford, an outstanding mathematical physicist (independently of this work). For his computer part to be applicable, he imposed additional restrictions on the maps considered. Still, generality is dear to mathematicians, though not to physicists, and the problem was considered solved. In a sense, it was solved indeed. Then another mathematician, D. Sullivan, who had recently moved from topology to dynamical systems, took up the challenge and proved the Feigenbaum conjecture without any assistance from computers. This is quite remarkable by itself; mathematicians usually abandon a problem, or even its whole area, after a computer-assisted argument. Even more remarkable is the fact that his proof is not only human-readable, but provides a better result. He lifted Lanford's artificial restrictions.
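The kind of computer experiment that leads to the conjecture is easy to reproduce. Here is a sketch (my own illustration; the post does not name a specific family of maps, so I use the logistic map x ↦ rx(1−x), the standard example). It locates the "superstable" parameter values of periods 2^n and watches the ratio of successive gaps approach Feigenbaum's constant δ ≈ 4.6692.

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def g(r, n):
    """Iterate the logistic map 2**n times from the critical point x = 1/2;
    g vanishes exactly at the superstable parameter of period 2**n."""
    x = 0.5
    for _ in range(2 ** n):
        x = logistic(x, r)
    return x - 0.5

def superstable(r_guess, n, tol=1e-12):
    """Newton's method (with a numerical derivative) for a root of g(., n)."""
    r = r_guess
    for _ in range(100):
        h = 1e-7
        d = (g(r + h, n) - g(r - h, n)) / (2 * h)
        step = g(r, n) / d
        r -= step
        if abs(step) < tol:
            break
    return r

# rs[n] = superstable parameter of period 2**n; rs[0] = 2 exactly.
rs = [2.0, superstable(3.2, 1)]
for n in range(2, 7):
    guess = rs[-1] + (rs[-1] - rs[-2]) / 4.669  # extrapolate the next guess
    rs.append(superstable(guess, n))

deltas = [(rs[k - 1] - rs[k - 2]) / (rs[k] - rs[k - 1]) for k in range(2, len(rs))]
print(deltas)  # the ratios approach Feigenbaum's constant 4.6692...
```

The conjecture (now a theorem) is that the same limit δ appears for a whole class of maps, not just this one; that universality is what made it so deep.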

The next point (the one I promised above) concerns the standards Lanford applied to computer-assisted proofs. He said and wrote that a computer-assisted proof is a proof in any sense only if the author not only understands its human-readable part, but also understands every line of the computer code (and can explain why this code does what it is claimed to do). Moreover, the author should understand all details of the operating system used, down to the algorithm used to divide the time between different programs. For Lanford, a computer-assisted proof should be understandable to humans in every detail, except that it may take too much time to reproduce the computations themselves.

Obviously, Lanford understood quite well that mathematics is a human activity.

Compare this with what is going on now. People freely use the Windows OS (it seems that even at Microsoft nobody understands how it works and what it does), and proprietary software like Mathematica™, for which the code is a trade secret and reverse engineering is illegal. From my point of view, this fact alone puts everything done using this software outside not only of mathematics, but of any science.


Next post: To appear.

Monday, April 1, 2013

D. Zeilberger's Opinions 1 and 62

Previous post: Combinatorics is not a new way of looking at mathematics.

While this is a reply to a comment by Shubhendu Trivedi in Gowers's blog, I hope that the following is interesting independently of the discussion there.


Opinion 1. Zeilberger admits there that he has no idea about the methods used even in his examples (the 4th paragraph).

He is correct that the Jones polynomial is to a big extent a combinatorial gadget. Probably, he is not aware that this gadget applies to topology only if you have a purely topological theorem at your disposal (proved by Reidemeister in the 1930s, it remains a non-trivial theorem). He may also not be aware of the fact that the Jones polynomial did not lead to the solution of any problem of interest to topologists at the time. The proof of the so-called Tait conjecture was highly publicized, and many people believe that this was an important conjecture. Fortunately, there is a document proving that this is not the case. Namely, R. Kirby, with the help of many other topologists, compiled around 1980 a list of problems in topology. About 15 years later he published an updated and expanded version. Both editions consist of several parts, one of which is devoted to problems in knot theory. The Tait conjecture is about knots, and it is not in the 1980 list (by the time Kirby started to prepare the new expanded list, it had already been proved). Nobody was interested in it, and its solution has no applications.

Eventually, the theory of Jones polynomial and its generalizations turned into an independent self-contained field, desperately searching for connections with other branches of mathematics or at least with topology itself.

But D. Zeilberger should be aware that the Tutte polynomial belongs to conceptual mathematics. It is one of the precursors of one of the main ideas of Grothendieck, namely, of K-theory. There is no reason to think that Grothendieck was aware of Tutte's work, but the Tutte polynomial is still essentially a K-theoretic construction.
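For readers who have not seen it, the Tutte polynomial is defined by a simple deletion-contraction recursion. The sketch below (mine, written for illustration only, and hopelessly slow on large graphs) computes it for a multigraph given as a list of edges; loops and multiple edges are allowed.

```python
from collections import Counter

def components(vertices, edges):
    """Number of connected components, by union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def tutte(vertices, edges):
    """Tutte polynomial T(G; x, y) as a Counter mapping (i, j) to the
    coefficient of x^i y^j, via the deletion-contraction recursion."""
    if not edges:
        return Counter({(0, 0): 1})
    (u, v), rest = edges[0], edges[1:]
    if u == v:  # loop: T(G) = y * T(G - e)
        return Counter({(i, j + 1): c for (i, j), c in tutte(vertices, rest).items()})
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if components(vertices, rest) > components(vertices, edges):
        # bridge: T(G) = x * T(G / e)
        return Counter({(i + 1, j): c
                        for (i, j), c in tutte(vertices - {v}, contracted).items()})
    # ordinary edge: T(G) = T(G - e) + T(G / e)
    result = tutte(vertices, rest)
    result.update(tutte(vertices - {v}, contracted))  # Counter.update adds counts
    return result

# The triangle K3 has T(K3; x, y) = x^2 + x + y.
print(dict(tutte({1, 2, 3}, [(1, 2), (2, 3), (1, 3)])))
```

The K-theoretic flavor is visible even here: the polynomial is a universal invariant with respect to the deletion-contraction relation, in the same way a Grothendieck group is universal with respect to its defining relations.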

The Seiberg–Witten ideas have nothing to do with combinatorics. The Seiberg–Witten invariants are based on topology and some advanced parts of the theory of nonlinear PDE. In the last decade some attempts to get rid of PDE in this theory were partially successful. They involve some rather combinatorial-looking pictures. I wonder if Zeilberger has written anything about this. But the situation is essentially the same as with the Tutte polynomial. These quite remarkable attempts are inspired, not always directly, by such abstract ideas as 2-categories, for example. Note that category theory is the most abstract part of mathematics, except, maybe, modern set theory (a field in which only very few mathematicians are working).


Opinion 62. First, the factual mistakes.

Grothendieck did not dislike other sciences. In particular, at the age of approximately 42-46 he developed a serious interest in biology. Ironically, in the same paragraph Zeilberger commends I.M. Gelfand for his interest in biology.

Major applications of algebraic geometry were not initiated by the “Russian” school, but Soviet mathematicians indeed embraced this field very enthusiastically. And the initial applications did not involve any Grothendieck-style algebraic geometry.

More important is the fact that Zeilberger's opinions are self-contradictory. He dislikes abstract (in fact, conceptual) mathematics, and at the same time praises the “Russian” school for applications of exactly the same abstract conceptual methods.

Zeilberger writes: “Grothendieck was a loner, and hardly collaborated”. Does he really know at least a little about Grothendieck and his work? Grothendieck's rebuilding of algebraic geometry in an abstract conceptual framework was a highly collaborative enterprise. He has almost no papers in algebraic geometry published by him alone. The foundational text EGA, Elements of Algebraic Geometry, has Grothendieck and Dieudonné as authors (in this order, violating the tradition of listing the authors of mathematical papers in alphabetical order) and was written by Dieudonné alone. More advanced things were published as SGA, Seminar on Algebraic Geometry, and most of this series of Springer Lecture Notes in Mathematics volumes is authored by Grothendieck and various collaborators. Some volumes present his ideas but don't have him as an author; one of them was written by P. Deligne and is authored by P. Deligne alone.

Zeilberger has no idea what kind of youth Grothendieck had, and presents some (insulting, I would say) conjectures about it. Grothendieck was always concerned with injustice done to other people, in particular within mathematics. His elevated sense of (in)justice eventually led him to (fairly misguided, I believe, but sincere and well-intentioned) political activity. He was initially encouraged by colleagues, who abandoned him when this enterprise started to require more than lip service.

The phrase “...was already kicked out of high-school (for political reasons), so could focus all his rebellious energy on innovative math” is obviously absurd to everyone even superficially familiar with the history of the USSR. If someone was persecuted on political grounds, then (he could be summarily executed, but at the very least) any mathematical or other scientific activity would be impossible for him for life. There would be no way to become a professor of Moscow State University, or to take part in the Soviet atomic-nuclear project.

Surely, Gelfand said something like what Zeilberger writes about the future of combinatorics. I was never at the Gelfand seminar, neither in Moscow, nor at Rutgers. But there are his publications, from which one can get an idea of what kind of combinatorics Gelfand was interested in. Had Zeilberger attempted to read any of these papers, he would hardly have seen there even a trace of what is so dear to him. All works of Gelfand are highly conceptual.

Finally, it is worth mentioning that Gelfand always wanted to be the one who determines the fashion, not the one who follows it. Of course, I see nothing wrong with that. In the late 1960s he regretted that he had missed the emergence of a new field: algebraic and differential topology. He attempted to rectify this by two series of papers (with coauthors; by this time he did not publish anything under his name alone), one about the cohomology of infinite-dimensional Lie algebras, another about a (conjectural) combinatorial definition of Pontrjagin classes (a basic notion in topology). It is very instructive to see what a “combinatorial definition” was for I.M. Gelfand.


Next post: What is mathematics?