This is a reply to a comment by vznvzn to a post in this blog.
I am not in the business of predicting the future. I have no idea what people will take seriously in 2050. I do not expect that Gowers’s fantasies, or yours, which are nearly the same, will turn into reality. I wouldn’t be surprised if humanity returned by that time to the Middle Ages, or even to the prehistoric level. But I wouldn’t bet even a dollar on this. At the same time, mathematics can disappear very quickly. Mathematics is an extremely unusual and fragile human activity, which appeared by accident, then disappeared for almost a thousand years, then slowly returned. The flourishing of mathematics in the 20th century is very exceptional. In fact, we already have many more mathematicians (this is true independently of which meaning of the word “mathematician” one uses) than society, or, if you prefer, humanity needs.
The meaning of the words “mathematics” and “mathematician” becomes important the moment “computer-assisted proofs” are mentioned. Even Gowers agrees that if his project succeeds, there will be no (pure) mathematicians in the current (or 300, or 2000 years old) sense. The issue could be the topic of a long discussion, of a serious monograph which would be happily published by Princeton University Press, but I am not sure that you are interested in going into it deeply. Let me only point out that mathematics has any value only as a human activity. It is partially a science, but to a large extent it is an art. All proofs belong to the art part. They are not needed at all for applications of mathematics. If a proof cannot be understood by humans (like the purported proofs in your examples), it has no value. Or, rather, its value is negative: a lot of human time and computer resources were wasted.
Now, a few words about your examples. The Kepler conjecture is not an interesting mathematical problem. It is not related to anything else, and its solution is isolated as well. Physicists had some limited interest in it, but for them it was obvious for a long time (probably from the very beginning) that the conjectured result is true.
The 4-color problem is not interesting either. Think for a moment: who cares whether every map can be colored with only 4 colors? In the real world we have many more colors at our disposal, and in mathematics we have a beautiful, elementary, but conceptual proof of a theorem to the effect that 5 colors are sufficient. This proof deserves to be studied by every student of mathematics, but nobody rushed to study the Appel-Haken “proof” of the 4-color “theorem”. When a graduate student was assigned the task of studying it (and, if I remember correctly, of reproducing the computer part for the first time), he very soon found several gaps. Then Haken published an amusing article, which can be summarized as follows. The “proof” is of such a nature that it can have only a few gaps, and finding even one is extremely difficult. Therefore, if somebody did find a gap, it does not matter. This is so ridiculous that I am sure my summary is not complete. Today it is much easier than at that time to reproduce the computer part, and the human part has also been simplified (it consists in verifying by hand some properties of a collection of graphs: more than 1,000 or even 1,500 in the Appel-Haken “proof”, fewer than 600 now).
Wiles deserved a Fields medal not because he proved FLT; he deserved it already in 1990, before he completed his proof. In any case, the main and really important result of his work is not the proof of FLT (that is for popular books), but the proof of the so-called modularity conjecture for nearly all cases (his students later completed the proof of the modularity conjecture for the exceptional cases). Due to previous work by other mathematicians, all very abstract and conceptual, this immediately implies FLT. Before this work (mid-1980s), there was no reason even to expect that FLT is true. Wiles himself learned about FLT during his school years (every future mathematician does) and dreamed about proving it (only a few have such dreams). But he did not lift a finger until it was reduced to the modularity conjecture. Gauss, who was considered the King of Mathematics already during his lifetime, was primarily a number theorist. When asked, he dismissed FLT as a completely uninteresting problem: “every idiot can invent zillions of such problems, simply stated, but requiring hundreds of years of work by wise men to be solved”. Some banker has already modified FLT into a more general statement not following from Wiles’s work and even announced a monetary prize for a proof of his conjecture. I am not sure, but probably he wanted a solution within a specified time frame; perhaps there is no money for this anymore.
Let me tell you about another, by now mostly forgotten, example. It is relevant here because, like the 3x+1 problem (the Collatz conjecture), it deals with iterations of a simple rule, and for another reason, which I will mention later. In other words, both at least formally belong to the field of dynamical systems, being questions about iterations.
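For readers who have never played with it, the 3x+1 rule really is as simple as advertised; a minimal sketch (the helper name `collatz_steps` is mine, not standard):

```python
def collatz_steps(n: int) -> int:
    """Iterate the 3x+1 rule (halve n if even, else replace n by 3n+1)
    until reaching 1, and return the number of steps taken.
    The Collatz conjecture asserts this loop terminates for every
    positive integer n; nobody has proved it."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 27 is the classic small starting value with a surprisingly long orbit.
print(collatz_steps(27))  # 111
```

The contrast between the one-line rule and the open problem is exactly the "iceberg" shape discussed later in this thread.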
My example is the Feigenbaum conjecture about iterations of maps of an interval to itself. Mitchell Feigenbaum is a theoretical physicist who was led to his conjecture by physical intuition and extensive computer experiments. (By the way, I have no objections when computers are used to collect evidence.) The moment it was published, it was considered a very deep insight even as a conjecture, and a very important and challenging one. The Feigenbaum conjecture was proved, with some substantial help from computers, only a few years later by the (outstanding independently of this) mathematical physicist O. Lanford. For his computer part to be applicable, he imposed additional restrictions on the maps considered. True, generality is dear to mathematicians, but not to physicists, and the problem was considered solved. In a sense, it was solved indeed. Then another mathematician, D. Sullivan, who had recently moved from topology to dynamical systems, took up the challenge and proved the Feigenbaum conjecture without any assistance from computers. This is quite remarkable by itself: mathematicians usually abandon a problem, or even its whole area, after a computer-assisted argument. Even more remarkable is the fact that his proof is not only human-readable, but provides a better result. He lifted Lanford’s artificial restrictions.
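The period-doubling phenomenon behind Feigenbaum's conjecture is easy to observe numerically, which is how Feigenbaum found it. A sketch using the standard logistic family x ↦ rx(1−x) (the function names are mine): measuring the period of the attractor shows it doubling 1 → 2 → 4 → … as r grows, and the spacings between successive doubling values shrink roughly by the Feigenbaum constant δ ≈ 4.669.

```python
from typing import Optional

def logistic(r: float, x: float) -> float:
    # The standard one-parameter family of maps of [0, 1] to itself.
    return r * x * (1.0 - x)

def attractor_period(r: float, x0: float = 0.5,
                     transient: int = 100_000,
                     max_period: int = 64,
                     tol: float = 1e-8) -> Optional[int]:
    """Iterate past the transient, then return the smallest p with
    x_{n+p} close to x_n; None means no short period was found
    (the chaotic regime)."""
    x = x0
    for _ in range(transient):
        x = logistic(r, x)
    start = x
    for p in range(1, max_period + 1):
        x = logistic(r, x)
        if abs(x - start) < tol:
            return p
    return None

# The attractor's period doubles as r increases through the
# bifurcation values (approximately 3.0, 3.4495, 3.5441, ...).
for r in (2.8, 3.2, 3.5):
    print(r, attractor_period(r))
```

This is only the evidence-collecting side the author says he does not object to; the conjecture itself is about the universality of δ across such families, which is what Lanford and later Sullivan actually proved.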
The next point (the one I promised above) concerns the standards Lanford applied to computer-assisted proofs. He said and wrote that a computer-assisted proof is a proof in any sense only if the author not only understands its human-readable part, but also understands every line of the computer code (and can explain why this code does what it is claimed to do). Moreover, the author should understand all details of the operating system used, up to the algorithm used to divide the time between different programs. For Lanford, a computer-assisted proof should be understandable to humans in every detail, except that it may take too much time to reproduce the computations themselves.
Obviously, Lanford understood quite well that mathematics is a human activity.
Compare this with what is going on now. People freely use the Windows OS (it seems that even at Microsoft nobody understands how it works and what it does), and proprietary software like Mathematica™, for which the code is a trade secret and reverse engineering is illegal. From my point of view, this fact alone puts everything done using this software outside not only of mathematics, but of any science.
Next post: To appear.
Dear sowa,
I have been reading your opinions in Gowers’ blog and here in your blog. I have to say that I agree with many of your arguments. In particular, I found interesting your claim about the new and revolutionary approach that Grothendieck introduced in mathematics. I totally agree with you on this point and also on your observation that the mathematical community has not accepted Grothendieck’s view completely. You strike me as a knowledgeable mathematician and deep thinker (even in the parts where I disagree with you), so I would like to ask you: Why do you think that most mathematicians have been reticent to accept Grothendieck’s approach? I think Grothendieck himself tried to analyze this phenomenon in "Recoltes et Semailles" when he talked about "his burial". Many people took his words as attacks on specific individuals motivated by grudges and paranoia. However, I think he was precisely discussing how, in general, the mathematical establishment has not wanted to appreciate his revolutionary visions in their full scope. What is your opinion about Grothendieck’s thoughts in R&S? By the way, I find it interesting that in his essay on the "two cultures", Tim Gowers does not mention Grothendieck’s thoughts, when the discussion about theory-builders and problem-solvers is beautifully carried out in R&S. Probably Tim Gowers does not know R&S, but anyway this would be a question for him, not for you.
Dear ACM,
I am very happy to have such a visitor!
I thought about your question, and I still don't know how to answer it (without offending too many people).
First, I believe that Grothendieck wrote about a much more subtle question than the question at hand. I must admit that I did not read "Recoltes et Semailles" systematically and may be wrong. Anyhow, it is quite obvious that the French school in algebraic geometry continued to work with his notions and in a style close to his, but not exactly the same. As is well known, Grothendieck was very disappointed by Deligne’s proof of the last of the Weil conjectures. The proof was not conceptual enough for him. This disappointment was predictable: he had outlined what he believed to be the proper way to complete the proof of the Weil conjectures. He made several specific conjectures, now called the “standard conjectures”, from which the Weil conjectures would follow. They are neither proved nor disproved to this day.
Why did his own students abandon his line of research? I think that only his students can answer this question, and since they have not answered it for decades, we should not expect that they will answer it now.
As for why the whole mathematical community did not embrace Grothendieck’s methods, one may suggest a rather trivial explanation: his methods do not produce such convoluted proofs as the Hungarian combinatorics does, but it is very difficult even to apply them, not to mention developing new ones. Learning new concepts is much more difficult than playing Lego with the old ones. My Ph.D. advisor used to say that, according to his experience, it is impossible to learn homology theory if one did not learn it as a student. He was recognized as an excellent teacher and lecturer by everybody. Up to now, the theory of spectral sequences (which some people have long considered outdated) is not taught in detail to many graduate students in algebra, algebraic geometry, or topology. It is considered to be too difficult. And this is a pre-Grothendieck notion.
One may try to count how many mathematicians are able to do anything depending on these ideas or the ideas of Grothendieck. I expect that only a tiny minority are. I wrote some story here, but then deleted it. The characters might recognize themselves. I will try to find some answers. My problem here results from the fact that I am familiar both with difficult results obtained in the classical style and with deep results of Grothendieck-style mathematics. It is so obvious to me that the Grothendieck-style results are much deeper and more difficult that I have a hard time trying to explain something so obvious.
Dear Sowa,
Let me make an additional comment. Probably you know that these days a conference about Simplicity (in mathematics and the arts) is running in NY (here is the link: http://www.s-i-m-p-l-i-c-i-t-y.org/). I knew about this event because one of the speakers is a friend of mine. I bring up the word simplicity because you wrote: "his methods do not produce such convoluted proofs as the Hungarian combinatorics does". I think the conceptual approach is simple. By simple I do not mean something trivial or not difficult. At this moment it is quite hard for me to give a right definition of when a mathematical theory could be called simple, but certainly one of the things is that it should avoid convoluted proofs and strange tricks. I know that what I am writing could be a little bit vague (I have to elaborate my ideas and will do it!), but I just want to emphasize that one of the goals of mathematics as a human activity is the search for simplicity, the simplicity given by beauty. Mathematics as an art and not a mere technical device.
Dear ACM,
This is not very easy to explain. Especially because I hardly have any idea of what your mathematical background is.
I attempted to do this in the post Conceptual mathematics vs. the classical (combinatorial) one.
Dear Sowa,
Concerning my mathematical background: I am finishing my Ph.D. studies at this moment. I have been working in quantum cohomology and mirror symmetry. So, I am familiar with algebraic geometry, symplectic geometry, derived categories and representation theory. I am really far away from being an expert, but I think I can follow arguments in those areas. On the other hand, I am a big admirer of Grothendieck's viewpoint on mathematics, in particular, the emphasis he puts on the importance of having big visions. Sadly, nowadays most mathematicians are only interested in publications (in the quantity, not the quality) and they do not like to discuss conceptual works, maybe because this kind of work cannot lead to a fast publication. Again sadly, in my current circumstances (as a student) I have to play this game. I read your post on the Stokes theorem and found it very interesting. I would like to write some comments, but I have to think a little bit more about some issues.
Dear ACM,
Well, apparently I went deeper into the past than you need. Still, I believe that this is a good example. And it has the advantage of being, I hope, understandable by many more people than some example based on the work of Grothendieck, or some later work. By the way, Matthew Emerton posted a very interesting comment related to the topic of this conversation.
I hope that I will find some time to write a "theoretical" (not based on examples, at least directly) explanation of this mystery: how abstract proofs can be easier and more difficult at the same time. Probably, it will be another post.
I agree with the rest. All this is a new phenomenon, imposed on us (mathematicians) by external forces. The US universities have changed dramatically during the last 20 years. During the 1990s this transformation hardly affected researchers, except for one or two attempts to abolish the tenure system. These attempts to use brute force failed, but the transformation continued. The main change is already in place: the universities are no longer governed by the faculty, as they were in the past; they are governed by career administrators. It is sad that mathematicians surrendered (almost) without any resistance. Even the tenure system was saved by other people, mostly from the humanities, I believe.
All this was predicted by the “New York Times” already in 1991.
Of course, you have to play by the rules. This is the only way to have a non-zero chance of getting a job which you, hopefully, will like for at least the first ten years. It is hard to guess what will happen in a more distant future.
hi....
as surely you realize, you didnt quite quote gauss correctly. but this did give me the idea to put the quote correctly on my site. also, your ref to zeilbergers opinions re grothendieck and combinatorics in the other blog post was great. I skimmed the AMS files on grothendieck he cited-- many pages. quite the mathematician. enjoyed the AMS writeups and have linked to several on my site.
was not aware too much of the controversy within mathematics between the different branches (conceptual vs combinatoric etc) until delving into gowers blog & some of your replies. it almost seems to be a political war by some on the level of left vs right, republican vs democrat. but zeilberger turns up an apparently old quote. "combinatorics is the slums of topology", saith whitehead. hilarious. I wonder roughly when he said that. it is probably an old controversy.
as for convincing you about the value of the collatz conjecture, its a losing battle, you obviously dont have much of an open mind there and many would share your view that it is insignificant and no proof of it could lead to anything substantial. I do intend to write more on my blog, stay tuned for that.
as for a disappointment in the opacity of the 4-color theorem: I have not studied it much either but it is understandable by experts who take it seriously (a camp you are clearly not in). yes, agreed, computer assisted proofs pose unique problems of validity and verification but they are not insurmountable and not so unlike the same issue(s) with complex solely-human-generated proofs.
the point is that many problems are like icebergs where the problem statement is only the visible part of the iceberg. it is not a priori clear which problems will lead to deep, sometimes extraordinary theory and which others are merely unsolved curiosities such that solutions would not move theory forward much. FLT is an example of this. erdos liked to work on problems like this, was good & talented at it, and when you devalue his [highly, supremely mathematical!] work, its rather shameful. apparently you dont like math with too many applications!
the feigenbaum constant problem which you mention is interesting to me; I encountered it years ago in the dynamical/emergent systems literature & definitely admired it. agree there is an interplay between physicists & mathematicians that is fruitful for both esp when neither imagines they occupy the superior (or what a therapist might call the "one-up") position....
you have an amazing grasp of 20th century math history, but my question is, maybe you could cite some area of research or paper less than a decade old as one that you admire. otherwise, you seem to be stuck in a time warp, the distant past. in your cantankerous writing its not even clear that you like *any* recent mathematics....
To vznvzn:
Of course, I perfectly realize that I did not quote Gauss exactly. This opinion of Gauss is widely known; I know it only from secondary sources, which are not even in English (did you find the original, either German or Latin, text?), and quoted it from memory. But I doubt that I distorted his opinion.
Yes, the AMS did a good job. It is too bad that the AMS flatly rejects suggestions to publish an English translation of his long autobiographical text. It is not published in French either. Only a Russian translation of the first two chapters was published. The reason behind this is that he names the people he writes about.
You are correct; there is a fairly old controversy. Sometimes it even spills over into actual politics. A very famous, but non-existent, French mathematician, N. Bourbaki (this is the pen name of a group of French mathematicians, the membership in which was kept secret for decades), published a multi-volume treatise aiming at an exposition of the basic facts of all of mathematics. In order to keep the project manageable, he adopted a very abstract approach (A. Grothendieck was a member of the Bourbaki group for a while) and a relatively new way of exposition. The books were very successful. Some people decided that the whole of mathematical education, almost from the elementary school on, should be reformed in a similar way, despite the fact that the preface to every book by N. Bourbaki states that the books are not suitable as textbooks even for future mathematicians (but, reportedly, P. Deligne did learn mathematics from them). A state-sponsored reform along these lines was implemented first in France, then in the USSR. Probably even in the US to some extent (in the US people are concerned with other issues). Of course, the reforms were a disaster (first of all because the available teachers were not able to understand the new textbooks themselves, much less to use them in teaching). Naturally, all this led to hot public debates. The solution was easily found: to blame N. Bourbaki for all the problems afflicting school education.
There are other highly publicized debates, for example, the polemics of one of the best Soviet mathematicians, V.I. Arnold, with all French mathematicians. Eventually it even led to a public debate between him and J.-P. Serre at the Collège de France, if I remember correctly. I should note that Arnold was (he passed away recently) a conceptual mathematician who liked topology and algebraic geometry a lot and used them in his work. Amazingly, he visited France for a year in the mid-1960s, where he got the inspiration for all his subsequent work.
Continued in the next comment.
I am not inclined to discuss the merits of not yet existing results. You can change my opinion about the 3x+1 problem in only one way: solve the problem, and show that the solution leads to progress on questions firmly established as important ones.
According to the experts, the human-understandable part of the “solution” of the 4-color problem is small and did not lead to any noticeable progress even in graph theory.
The validity and verification issue is a smoke screen constantly used to divert the attention of the public from the real problems. As I said, nobody cares whether 4 colors are indeed enough. When a mathematician decides to study a paper or a book, he or she has no intent to verify correctness. Mathematicians are looking for new ideas, sometimes just to enjoy them, sometimes to use them. If I see an idea useful for my own work, I don’t care if the rest of the paper is correct. I don’t care even if it is correct at this particular place. If I am able to use that idea in my situation, I will be very happy. If I am interested in the correctness of a result (I may need to use it in my research), I am not going to verify the proofs line by line. I will look at the general picture: is it convincing or not? If it is, an expert will be able to reconstruct the proofs.
One may put this into other form: mathematics is about clarity and understanding, not about verification. (There is a very relevant quote from William Thurston in this blog.)
Of course, when we see a bunch of ice in the ocean, we cannot be sure whether it is the top of an iceberg or, say, an ice-covered island. Still, we can make some educated guesses based on the weather conditions, which part of the ocean we are in, etc. Good problems are very rare. FLT is not an example. Clearly, A. Wiles is an ambitious person who wanted to solve a famous problem. He could have used another one. For example, the modularity conjecture, the one he actually proved (with the help of R. Taylor). FLT is an immediate corollary, thanks to the work of other people.
I am not aware of any applications of Paul Erdős’s work, but I can easily believe that they exist. They should exist in CS by the very nature of both fields.
Once I went to an expository talk by a former Hungarian mathematician who is now a well-known computer scientist in the US. To my big surprise, he did not even mention any combinatorics, Hungarian or not. His main idea was the usefulness of topology and (abstract) algebraic geometry for computer science. It is far from being clear which sort of mathematics is more important for applications. As soon as a scientist gets enough courage to study conceptual mathematics, she or he finds in it something useful. Unfortunately, education is designed in such a way that scientists are usually not exposed to such mathematics at all.
You think that ten years ago is the distant past? I am trying to use as old examples as possible (cf. my new post about the Stokes theorems). Otherwise nobody on Gowers’s blog will understand me. I admire, for example, the current work of J. Lurie from Harvard. Does this help you?
On the other hand, it very well may be the case that the recent golden age of mathematics has ended, that J. Lurie is only an exception, and that we will not see such conceptual progress as in 1940-1980 for centuries. This will certainly be the case if Gowers succeeds in his attempt to take control of the whole mathematical community.
All proofs belong to the art part. They are not needed at all for applications of mathematics.
This is blatantly false. The Curry-Howard correspondence proves that computer programs are proofs. Therefore proofs are emphatically needed for MANY applications of mathematics.
Software verification also critically relies on theorem proving.
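For readers unfamiliar with the correspondence invoked here: under Curry-Howard, a typed program can be read as a proof of the proposition its type expresses. A small sketch in Python (the names are mine, and Python's type hints only gesture at what a real proof assistant would check mechanically):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Under the Curry-Howard reading, a term of this type is a proof of
    (A -> B) -> ((B -> C) -> (A -> C)), i.e. implication is transitive.
    The program itself is the proof; what it computes is secondary."""
    return lambda a: g(f(a))

# Any concrete instantiation exercises the "proof":
inc_then_double = compose(lambda n: n + 1, lambda n: n * 2)
print(inc_then_double(3))  # (3 + 1) * 2 = 8
```

Whether such a reading settles the philosophical dispute in this thread is, of course, exactly what the two commenters go on to argue about.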
Well, I am not aware of any software which always works according to its specifications. I have no idea what was proved about the blogspot.com software, or Windows, or whatever. In fact, the essential parts of it are proprietary, reverse engineering is illegal, and therefore even if a proof were possible, it would be illegal.
At the same time it is possible to use blogspot.com, and even Windows, without any proofs and without any knowledge of how they work. There is no practical need to prove that a computer program always does what it is supposed to do. It is sufficient to test it in various situations. From time to time it will fail, and then humans intervene.
Probably, you use not only computers, but also cars and planes. Nobody proved in any sense that they will not crash. In fact, they do crash from time to time, so no proof is possible. But we consider them very efficient and reliable means of transportation (regular flights of big passenger jets being more reliable than cars).
This is the main reason why no application of any science needs proofs. What would be the purpose of proving that some software is correct if the hardware may fail? Or if some 0 on a hard drive may turn into a 1 as a result of being hit by radiation coming from deep space?
Finally, the statement "computer programs are proofs" does not belong to mathematics and hence cannot have a mathematical proof. The models of mathematical reasoning studied in mathematical logic and related subjects are just that: models. One can prove something about such models, but one cannot prove anything about mathematics.
Probably, you use not only computers, but also cars and planes. Nobody proved in any sense that they will not crash.
Depending on what you mean by "crash", you are quite wrong about that. Verification and proof systems have a long history, and are only becoming more common as theorem proving tools improve.
What would be the purpose of proving that some software is correct if the hardware may fail?
Sorry to say, but this is a very ignorant statement. Firstly, hardware also undergoes rigorous verification to ensure it meets its specification, which is yet further proof that proof theory is applied everywhere. Look up VHDL and similar design languages.
Furthermore, was all of mathematics suddenly pointless once Russell demonstrated that set theory was inconsistent? Also, why bother developing a proof of a conjecture at all since some humans are going to fail to properly understand and apply the results? It's frankly a ridiculous argument.
Finally, the statement "computer programs are proofs" does not belong to mathematics and hence cannot have a mathematical proof.
A mathematician by the name of Haskell Curry actually provided just such a proof. Perhaps you've heard of him? The proof is actually named after him, and I provided the name for you: Curry-Howard correspondence, sometimes known as the Curry-Howard isomorphism.
Dear Sandro Magi,
It doesn't seem that we can have a meaningful discussion. The reason is that we are working in almost completely disjoint fields and, partially as a corollary, have very different styles of thinking and writing. You may find a few words about my style in the comments to that post of Gowers's.
Still, I am going to try. I expect some comparable efforts from you.
I suggested to you some broad viewpoint on the importance of proofs for the advancement of humanity (note that advances in technology do not always advance humanity, but that is a big and important topic by itself). You give me as a reply references to two technical papers in your area. Judging from your Google+ account, you are not even a researcher by profession, so they are, most likely, very technical. Do you really think that I will read them? If you do, please, go and read Deligne’s papers on Weil's conjectures, for example (the author and the topic are negotiable to some extent, but everything related to mathematical/formal logic and CS is excluded). When you succeed, understanding the cited papers will be my top priority.
Apparently, you want from me some definitions of common English words. OK. By an airplane crash I understand a hull loss accident resulting in a high number of fatalities (relative to the capacity of the plane). By a car crash I understand a car accident resulting in the death or irreversible disability of at least one person.
By software working not according to the specifications, I understand software with a published detailed description of what it is supposed to do, which relatively often fails to do it. For example, all Microsoft software. If you want a concept parallel to a plane crash, then by a crash of the Windows OS I understand any situation when the OS “freezes” and the only way out is to pull out the power plug.
I do not care how you verify that hardware meets its specification (I would appreciate a definition of the word “verification” as you use it). I care even less about the computer languages used, very much in the same way as a layperson is not interested in the theory of aircraft design. For me, the only important thing is whether it works or not. Very often it doesn’t and needs a complete replacement (I do not mean the whole computer, but a hard drive, the video card, etc.). Whatever you and your colleagues verified about it, it crashes. This means that no proof that it is going to work can exist: it is a wrong statement.
Continued in the next comment.
Continuation.
I have been around in this world long enough to be totally bored by all these popular discussions of Russell's paradox. Their purpose is to sell books or to “defeat” an opponent. In principle, it is another worthwhile topic for discussion, but who would read it? Too complicated for a layperson, too uninteresting for a mathematician.
Russell did not demonstrate that set theory is inconsistent. First of all because the notion of a consistent theory did not exist at the time. Set theory in the sense we now understand it also did not exist. Russell showed that an unlimited application of a certain principle of set formation leads to a contradiction. G. Cantor, who discovered set theory, was aware of this. If one uses Cantor's definition and his explanation, Russell's paradox shows that the set leading to it is not a set in Cantor's sense. But this is clear even without “Russell's argument” (which is, of course, not due to Russell, and was known to the Ancient Greeks).
At the time Russell published his paradox, essentially all of mathematics was independent of set theory and therefore was not affected at all. It is still independent of most of set theory; the border where one needs to clarify the notion of a set has been approached only two or three times.
Why bother to prove some conjecture? I have no idea. Some people find this activity pleasant, entertaining, and leading to a steady income. Some like the sportsmanship involved. Your argument is indeed “frankly ridiculous”; please do not attribute it to me.
Yes, I have heard about Curry. I even browsed through some of his books; they turned out to be slow going and for this reason boring. It does not matter after whom something is named; the attribution is usually wrong. So, please, give me the statement, or at least a good reference. I am not going to Google it.
Do you really think that I will read them? If you do, please, go and read Deligne’s papers on Weil's conjectures, for example
All I expect from you is that when you make declarative statements, you know what you're talking about. I provided numerous counterexamples demonstrating that your statements about the applications of proofs are false in general. Whether you dig into the details as to precisely why they are false is up to you.
Apparently, you want from me definitions of common English words. OK. By an airplane crash I understand a hull-loss accident resulting in a high number of fatalities (relative to the capacity of the plane). By a car crash I understand a car accident resulting in the death or irreversible disability of at least one person.
Which is outside the domain of software. I've provided several analogies to point out why this argument is fallacious, so I'll try one more time: you might as well require that the Goldbach conjecture be proved before accepting the proof of Fermat's Last Theorem. The one has nothing to do with the other.
By software not working according to specifications, I understand software with a published, detailed description of what it is supposed to do, which relatively often fails to do it.
I know what you meant, and I'm telling you that plenty of software does have such formal specifications. That all software does not have an explicit formal specification does not imply that none of them do.
Whatever you and your colleagues verified about it, it crashes. This means that no proof that it is going to work can exist: the statement is simply false.
No, it, meaning the software, did not crash, the environment in which the software operates "crashed". Once again, this is a category error. You say that mathematics is a human activity, therefore it takes place in the human brain. You might as well conclude that mathematical proofs are all invalid because some people believe inconsistent propositions, thus rendering all propositions true, even false ones. It's a completely fallacious argument.
The properties of the machine/environment on which a program runs form an explicit axiomatic basis, just as mathematics has an implicit axiomatic basis that requires storing, retrieving and manipulating syntax via rules, i.e. a computer, like a brain. You cannot declare that all proofs are useless because some brains/machines are faulty; you have to prove that ALL brains/machines are faulty to prove your conjecture. You have not done so.
So, please, give me the statement, or at least a good reference [to the Curry-Howard correspondence]. I am not going to Google it.
As with anything, the very first result from Google is the decent Wikipedia page with plenty of references. The result basically establishes a formal correspondence between proof systems and computation, i.e. that every proof system has a corresponding computational character. Thus, programming is theorem proving (cf. “Theorems for Free!”), though some of the theorems aren't very interesting.
Thus, given that programs are proofs, we can conclude that programming is theorem proving. Therefore, your statement that proofs are not used in applications is false.
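For readers unfamiliar with the correspondence, here is a minimal sketch in TypeScript (the function names are my own, purely for illustration, and are not from the discussion above): read through Curry-Howard, a total function of a polymorphic type is a proof of the corresponding propositional formula.

```typescript
// Under Curry-Howard, types are propositions and programs are proofs.
// A total function of this type proves that implication is transitive:
// (A -> B) -> (B -> C) -> (A -> C).
function transitivity<A, B, C>(f: (a: A) => B, g: (b: B) => C): (a: A) => C {
  return (a: A) => g(f(a));
}

// Pairing proves conjunction introduction: A -> B -> (A and B).
function conjIntro<A, B>(a: A, b: B): [A, B] {
  return [a, b];
}
```

Running such a proof is just running the program: `transitivity((n: number) => n + 1, (n: number) => n * 2)(3)` evaluates to `8`.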
I would appreciate a definition of the word “verification” as you use it
Oh, and "verification" means exactly what it sounds like: that there are desirable properties of a system that we verify are present in a real implementation via formal methods.
Dear Sandro Magi,
I wrote a fairly detailed reply to you, but some “verified” soft crashed and the text disappeared. Probably, I should blame the “environment” (parenthetically, I would like to say that your attempt to shift the blame from the software to the “environment” did not succeed), the United Nations, the Starways Congress, and who knows whom else for this. Anyhow, the text disappeared, and I no longer feel that it makes any sense to refute every wrong argument made by you. There are too many of them.
Actually, my initial reaction to your last three comments was to mark them as spam. That would have violated my very old principles for online discussions, so I published them.
I would like to give you some advice. When you encounter a statement which sounds outlandish to you (like my “proofs are not needed for applications”), try to understand what its author had in mind. Maybe you will agree with it after some explanations. And never try to refute it by no less outlandish statements (like your “programs are proofs”). Also, do not replace “Do you know how to use Google?” by “There is a Wikipedia article with references about this topic”. Your attempt to do the latter only convinced me that you do not know what you are talking about.
This blog is not devoted to any internal issues of CS. It is now clear that you do not know what a proof is. I have no intention to try to sort out if you know what a program is.
If you follow my advice, you will be welcome here. If not, not. I will address the arguments in your comments which are not yet addressed only if I am asked for an explanation by somebody who, by my own judgment, has a chance to benefit from such an explanation. A software engineer or an expert in CS and related areas usually does not. Still, I know, to some degree personally, two outstanding experts in CS with whom I could have a really productive discussion.
Good luck
I wrote a fairly detailed reply to you, but some “verified” soft crashed and the text disappeared.
Oh? And what verified software would this be? I'm quite curious.
Anyhow, the text disappeared, and I no longer feel that it makes any sense to refute every wrong argument made by you.
Go ahead and refute just one then. I'll wait.
And never try to refute it by no less outlandish statements (like your “programs are proofs”).
Outlandish? I see, so a long-celebrated proof in computer science, and by a famous mathematician no less, is outlandish. I see I'm wasting my time here, as you've even gone the route of ad hominem attacks. Clearly fallacious reasoning is just endemic to your thinking, so I'll leave this at the very obvious fact that your claims are false, that I provided plenty of citations demonstrating their falseness, that you have not provided a single reference in support of your claims, and yet you continue to cling to your incorrect beliefs. So be it.
Still, I know, to some degree personally, two outstanding experts in CS with whom I could have a really productive discussion.
Go ahead and ask your experts if I know what I'm talking about and whether the statements I objected to in your post were correct and unambiguous. I look forward to this discussion.
As for your claims about whether software engineers and/or experts in CS and related areas know what they are talking about, well, you hadn't even heard of the Curry-Howard correspondence before I mentioned it, probably the most famous result linking formal systems and computation. Colour me skeptical of your own qualifications to comment on other people's expertise.
I am not in the business of satisfying curiosity about my personal life, in particular, about what software and hardware I use.
Don’t wait; I will not say anything: I clearly stated so. As an exception, I will say a few words about Curry-Howard in a separate comment. It may happen that a post will be needed. You did not provide any useful citation. Even Amazon.com is more useful, not to speak of Google.
Please, do not tell me what I should do, in particular, whom I should ask about what. This will lead to a permanent ban. You have been warned.
Nowhere did I claim that I am an expert in CS. I claimed the opposite: I am interested in it just a little bit more than a typical owner of a desktop or laptop computer. Why do you expect me to know anything about some technical theorem related to it? I did not make any claims about CS experts and software engineers. I told you about my personal experience: only rarely is such an expert capable of having a productive discussion with me. Your case is a good example: without any grounds, you expect me to know CS; you are unwilling or unable to understand “big picture” communication.
I am not in the business of satisfying curiosity about my personal life, in particular, about what software and hardware I use.
So you've made another claim without evidence of any kind, i.e. that you were using verified software that lost your work, but we're just supposed to take your word for it.
Your case is a good example: without any grounds, you expect me to know CS; you are unwilling or unable to understand “big picture” communication.
Nope, I didn't expect you to know anything about CS. In fact, my very first post was just a heads up that your claim as worded wasn't strictly true, as proven by computer science in the Curry-Howard correspondence; in other words, your "big picture" was incorrect.
I gave you everything you needed to correct your position in my very first post, but you insisted that you were right, and further, that you weren't even going to do any work whatsoever to verify that what you claimed to be true was actually true. The burden of proof is on those making claims to support those claims.
It seems clear at this point that you aren't interested in truth though, only in being right. Ban me if you like, but you're the one making claims unsupported by evidence. And while I'm not telling you what to do, if you really are interested in mathematics and you really think I'm full of crap, then you ought to consult your CS experts on our discussion and whether the original statement that I objected to was strictly true as worded. Don't take the word of some random internet poster who provided you with numerous references, please do consult your own sources.
As for the usefulness of my citation regarding Curry-Howard, I provided a Wikipedia link because you are not a CS expert. The Wikipedia page has links to all sorts of relevant academic literature discussing the Curry-Howard correspondence, including lectures from such greats as Moggi and Sorenson. Frankly, you couldn't ask for a better introduction, but hey, I guess you can't please everyone.
You are an expert in software; at the very least your Google+ profile says so. I said that I hardly know more than an average person using desktop or laptop computers. Isn't it obvious what software I am using to write comments here? Windows OS, a browser, and the Google Blogger GUI. I will not contest a claim that they are verified (I put “verified” in quotation marks, as you could notice); I am sure they aren't. My point is (and was from the very beginning) that we use unverified software all the time. We (typical users) use a lot of software from rather different parties, from individuals writing it for fun to the giants of the industry. We do not care about proofs of its correctness; we only expect that it does not make serious mistakes too often. For example, if this comment disappears, I will be upset, but if Google uses for it a font different from the specified one (it does sometimes), I will remain calm.
I satisfied your curiosity about the software despite my intention not to do so. Could you reciprocate and do me a favor? Could you tell me whether you feel that the above paragraph is relevant to the discussion, and if it is, how, and how you would state the main idea behind it?
The rest of my reply will follow after your answer, or after you have gone too long without replying.
Could you tell me whether you feel that the above paragraph is relevant to the discussion, and if it is, how, and how you would state the main idea behind it?
It's only relevant insofar as verified software is an existence proof that your original claim was false. I never commented on the topic of this article; I only objected to a single claim in it, namely that the art component of mathematics (proofs) is never used in applications. This is false in general for the domain of computer science, and it will become even more apparent as CS matures and proofs become more widespread (as they already have).
Dear Sandro Magi,
I got a notification about your two new comments. Unfortunately, you again repeat the same claims you have already made before.
The only new idea is that Arrow's theorem does not tell us anything about democracy, no matter whether there is a universally accepted definition of democracy or not (I believe there is not). But Arrow's theorem is universally accepted as a result telling us something important about democracy, although it seems that we still do not fully understand its significance. At the very least, thanks to Arrow's theorem, we (you and I) now know that our understandings of democracy are different; and my position about definitions of democracy was directly influenced by thinking about Arrow's theorem.
But this blog is not about political science. I mentioned Arrow's theorem as an analogy to help understanding. It did not work for you. Too bad; maybe it will help somebody else. So, I am not inclined to discuss the notion of democracy here.
Beyond this, there is nothing new in your comments; for this reason I left them in the status “awaiting moderation”. But I did not mark them as spam (although they are spam by my definition, which is quite broad).
In the meantime, I learned a little bit about the Curry-Howard correspondence. It turned out to be closely related to things in mathematical logic that I was (or am) interested in. I wonder why you used such a sophisticated result, infinitely far from the interests of a typical mathematician. You could have used much, much simpler results (even at the elementary school level, or is it now at the middle or even high school level in the West?) to convey the essence of your argument.
Anyhow, the Curry-Howard correspondence does not refute my claim.
Dear Sandro Magi,
I feel that enough time has passed without your replies. So, as promised, I published your comments. I regret that you are unwilling to carry on a serious discussion.
I do not accept your comment above as a sincere explanation of your understanding of my comment.
I made a misprint, sorry. In my phrase “I will not contest a claim that they are not verified” the “not” was missing; but this is clear from the context, and I do not think that this misprint is the reason for your lack of understanding, or, more probably, your unwillingness to argue with what you did understand. For convenience, I will repeat my point in the next comment.
All computer users constantly use unverified software, and use it successfully. The verification of some piece of software will not add anything to its usefulness.
It will not even guarantee it against mistakes, because it interacts with a lot of unverified and not entirely correct other software, and it works not in an ideal theoretical situation but on physical hardware, which may fail for many reasons, some of which are completely beyond our control, like radiation coming from the ground or from outer space.
Therefore, for real-life use (i.e. for applications), we do not need verification of software. We need it to be tested, that’s all.
If somebody wishes to verify some software, she is free to do so. She can also play chess or fly to Florida, whatever suits her tastes. All this would be equally useless for software users. I am aware that some government agencies in some countries require (or at least required in the past) the verification of some software. Well, this is hardly a unique example of wasting taxpayers’ money.
Now, the promised reply to the rest of your comment.
First, a counterexample does not refute “the big picture”, even in mathematics. It may refute a precisely stated theorem. But more often than not, a counterexample leads to a refinement of “the big picture”. Maybe some exceptional cases were missed, or one needs to use different definitions.
If you claim that something nearly obvious is wrong by referring to highly technical sources, the burden of proof is on you. You need to explain what you are referring to. Instead of this, you first gave references to some papers in CS without any reason to expect that I would be able to read them; I am a mathematician, not a computer scientist, after all. Now you suggest even more: that I consult real-life computer scientists. What makes you think that I will actually do something like this? I would be very happy to learn something from comments to my notes in this blog, or any other. Actually, I did learn a lot. But if you are unable to provide an explanation, I have absolutely no reason to search for it elsewhere.
If you consider the book by Moggi and Sorenson to be so great, why did you not refer me directly to it? Why did you use Wikipedia as an intermediary? Wiki lists a lot of sources; how could I know which one you like most?
Well, I did sort out that Moggi-Sorenson is considered to be the best book about the Curry-Howard correspondence. But it includes a lot of other material, and to fully understand and appreciate it, one needs to read the whole book (according to various reviews). Is it reasonable to expect that somebody will study a 400-page advanced book for the sake of a reply to a comment in a blog? My usual limit in such a situation is 12 pages of fiction.
In fact, I am an exception among mathematicians: I always was (and am) interested in mathematical logic and related areas and did study them to some extent. But I never worked in this field. So, Moggi-Sorenson looks like an interesting book to me. Unfortunately, I hardly have time to study it now. Maybe some time in the future. Maybe it will even be decently priced by then. While searching for some less demanding sources, I found something interesting about other things. But I would have found them faster if I had started with Moggi-Sorenson instead of Wiki.
This is the main reason why no application of any science needs proofs. What would be the purpose of proving that some software is correct if the hardware may fail? Or if some 0 on a hard drive may turn into a 1 as a result of being hit by radiation coming from deep space?
Oh, and just FYI, mathematics has been applied to prevent random hardware faults as well, cf. “Static Typing for a Faulty Lambda Calculus”. Proofs are absolutely needed to ensure the desired fault-tolerance properties are achieved. All of computer science is a counterexample to your assertion that proofs are not needed in applications. Hell, the web itself is a large distributed system, and web programs, including blogspot.de hosting your blog, are designed to circumvent limitations implied by the CAP theorem.
Since mathematics exists anyhow, why not apply it? This does not mean that all aspects of mathematics are needed for applications. If you made a good guess (and the discovery of any mathematical proof includes many guesses, from a key insight to minor guesses about how to prove some lemmas), you had better try it, and if it works, use it. I doubt that any innovative software design ever waited for a proof of correctness before its implementation, marketing, and selling (of a product with a lot of bugs) began. Good mathematical insights may wait hundreds of years, and for the development of completely new theories (created for completely unrelated purposes), before they are proved.
If the CAP theorem imposes some limitation and it is indeed a theorem (it looks like it is), then the limitations it imposes cannot be circumvented. If the WWW circumvents these limitations, then the assumptions of the theorem do not apply to the WWW, and therefore the proof of the CAP theorem is irrelevant. But the insight into the nature of the limitations may be important and may stimulate real work in an important direction.
Perhaps you know about Arrow’s very famous theorem. He got the Nobel Prize in economics probably mostly for this theorem (it is not a real Nobel Prize, but the money is the same). The proof is very easy, but the insight is deep. Following your approach, one can interpret Arrow’s theorem as claiming that a reasonable democracy is impossible. This is usually done in order to attract attention. The conditions of being “reasonable” are fairly weak and desirable. For example, if a majority likes A more than B, then B should not be elected, or the proposal B should not pass. In fact, a little bit more is required. Anyhow, Arrow’s theorem is true and its proof is correct. A “true” democracy is not possible. Still, democracy as a way of making decisions affecting many people existed already more than two thousand years ago; and today the citizens of many countries consider the order of things in their countries, villages, and workplaces to be democratic.
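As an aside, the difficulty behind Arrow’s theorem can already be seen in the classical Condorcet paradox: with three voters, majority preference between pairs of candidates can be cyclic. A small sketch in TypeScript (the ballots are invented purely for illustration):

```typescript
// The classical Condorcet profile: three ballots over candidates
// "A", "B", "C", each listed best-first.
const ballots: string[] = ["ABC", "BCA", "CAB"];

// x beats y when a strict majority of voters rank x above y.
function beats(x: string, y: string): boolean {
  const wins = ballots.filter(b => b.indexOf(x) < b.indexOf(y)).length;
  return 2 * wins > ballots.length;
}

// Here beats("A", "B"), beats("B", "C") and beats("C", "A") all hold:
// majority preference is cyclic, so no ranking of the candidates
// respects every pairwise majority.
```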
This does not mean that all aspects of mathematics are needed for applications.
I never claimed it was. Proofs are inherent to programming, though. Your universally quantified statement that proofs are NEVER needed is false. There are also plenty of analyses performed in other engineering disciplines that amount to proofs, i.e. that bridges will not collapse, etc.; they just don't use the same syntactic formalisms as academia.
If the CAP theorem imposes some limitation and it is indeed a theorem (it looks like it is), then the limitations it imposes cannot be circumvented.
Circumvented was a poor word choice. "Addressed" is a better choice. Most deployed systems give up strong consistency and provide only eventual consistency.
Following your approach, one can interpret Arrow’s theorem as claiming that a reasonable democracy is impossible. [...] Anyhow, Arrow’s theorem is true and its proof is correct. A "true" democracy is not possible.
Incorrect. If we define democracy as a system comprised of agents wherein each agent has equal say in how the system operates, then we need only give up the minimum number of assumptions from Arrow's axiomatic basis that are not implied by the above definition. Arrow's theorem states:
1. If every voter prefers alternative X over alternative Y, then the group prefers X over Y.
2. If every voter's preference between X and Y remains unchanged, then the group's preference between X and Y will also remain unchanged (even if voters' preferences between other pairs like X and Z, Y and Z, or Z and W change).
3. There is no "dictator": no single voter possesses the power to always determine the group's preference.
Only properties 1 and 3 are required by democracy, so while it may be surprising that the group's preference for X and Y changes even though they collectively don't change their opinion on X and Y specifically, it is not fundamentally inconsistent with the definition of democracy. Thus democracy is achievable, and we now know exactly how it was and can be achieved.
I did not make any claim of this sort. But you are not particularly interested in what I wrote, or in how it should be understood. I do not write for software engineers. There are some prerequisites. I am willing to explain things, but only when asked.
You shifted the blame from the software to the environment above; now you are replacing “circumvented” by “addressed”. If you (I mean the community) addressed the problem of constant failures of basic software in the way you indicated, you are misleading the public. People who write in EULAs “we do not guarantee suitability of this software for any purpose” are at least honest.
You are trying to do a similar trick with Arrow's theorem. You introduce your own definition of democracy. If it were based on a radically new approach to political science, it would have a chance of being interesting. But you just drop one of Arrow's assumptions. Do you think that people are interested in Arrow's theorem (including the Bank of Sweden Nobel Committee) because he did not understand what democracy is?
It would be better if the comment above followed this one. But they are both replies, and each is posted next to the comment it replies to.
I did not make any claim of this sort. But you are not particularly interested in what I wrote, or in how it should be understood.
How it should be understood should follow from what you wrote. This is what clarity means. Perhaps you don't think that your inferences follow logically and unambiguously from your premises either?
If what you have said was misinterpreted, then you should correct your message, you shouldn't blame the reader. It's never the reader's fault, just as it's never the reviewer's fault if something is unclear in a published proof.
You are trying to do a similar trick with Arrow's theorem. You introduce your own definition of democracy.
That's a dictionary definition. Go ahead and find me a single definition of democracy that requires property #2 and I will concede this point. Just one.
What and how somebody writes depends on the intended audience. It is so in fiction; it is so in advanced mathematical texts. I don’t have firsthand knowledge of CS, but from customer reviews on Amazon it seems that the same is true there.
I already told you that my intended audience does not include software engineers and experts in CS. I have no intention of writing texts which are clear to them.
My “inferences” do follow from my starting points (they are not “premises”). But they follow in a sense very different from formal logic and, apparently, from the sense you are used to.
After I wrote that I am just a little bit more knowledgeable about hardware and software than an average user, you gave me references to technical papers in CS. When I asked for something more accessible, you suggested Wikipedia. If you believe that I don’t know about Wikipedia, why are you wasting your time arguing with an incompetent, intellectually challenged guy who claims to be a mathematician? Who misinterpreted whom?
If a reader lacks the prerequisites for a text and is unable to understand it, it is the reader’s fault. Even in fiction, if a reader is unaware of the cultural context of a novel and misses its point for this reason, it is the reader’s fault. If you did not read the last three posts of April 2012 (the only two posts preceding them are a test and a remark in a long discussion at another place), it is your fault. These three posts outline the cultural context of this entire blog. Of course, they are not the whole context, but if you do not recognize the context, then you do not have the prerequisites for the rest.
But I am willing to provide explanations, if I see that somebody really wants to understand what I wrote and it is possible to provide explanations in the blog format. If somebody needs to take several graduate courses in mathematics first, I cannot help. If somebody shows up here and starts with something like “This is blatantly false”, I am unwilling to help: I do not expect that such a person is interested in understanding.
I told you that I am not going to google “Curry-Howard”. After this, you demand from me a definition of democracy satisfying your arbitrary requirement? There is no fixed and universally accepted definition of democracy, and for good reason: it is neither possible nor desirable to give such a definition. Some “definitions” appear to be incompatible. A definition I was told about a few months ago is truly amazing. But I will not tell you.
If you believe that I don’t know about Wikipedia, why are you wasting your time arguing with an incompetent intellectually challenged guy who claims to be a mathematician?
I don't claim to know anything about anyone I converse with online, but what I do know is that you explicitly said you weren't going to search for the term and find a reference for yourself. I charitably provided a reference that links to plenty of other good sources. If you don't care for Wikipedia, then consult its references, which are all academic sources.
If a reader lacks the prerequisites for a text and is unable to understand it, it is the reader’s fault. [...] If somebody shows up here and starts with something like “This is blatantly false”, I am unwilling to help: I do not expect that such a person is interested in understanding.
Regardless of context, that single claim in your article is blatantly false.
After this, you demand from me a definition of democracy satisfying your arbitrary requirement? There is no fixed and universally accepted definition of democracy, and for good reason: it is neither possible nor desirable to give such a definition. Some “definitions” appear to be incompatible.
I didn't demand anything, but let's suppose I accept your claim that there is no universally accepted definition of democracy. I'd wager good money that not a single one of these various definitions would require property #2 of Arrow's theorem. Therefore, Arrow's theorem has nothing interesting to say about democracy. Once again, if you find me even a single definition that requires all three properties of Arrow's theorem, I will concede this point.
Well, your first paragraph has already been answered. Nothing prevented you from saying that, in your opinion (shared by many), one “couldn't ask for a better introduction” than Moggi-Sorenson.
In the second one, you repeated the same statement once more. No need to reply to it again.
Now, the third and last one. I thought it was absolutely clear that this blog is not about political science. I used an analogy; it does not work for you. Sorry. Despite this, I still believe that it is a good analogy and may help others. Unfortunately, you missed its point (in the same manner as you missed the point about working but unverified software). Also, you think that “Arrow's theorem has nothing interesting to say about democracy”. You are entitled to your opinion, but a lot of people, including myself, think differently. There is nothing exotic in our position; in fact, it is the default one. My analogy may be useful for readers who think so, and hardly relevant for others.
Finally, about the Curry-Howard correspondence. The slogan “a program is a proof” is a good one to attract attention. But, as usual, the devil is in the details. In fact, not only in the details. The Curry-Howard correspondence transforms a program into a proof in a formal deductive system. Now, this notion of a “proof” and the notion of a proof used by mathematicians daily are related, but different. A (sufficiently strong) deductive system is a model of mathematical reasoning. Such a model can be studied by mathematical tools; this is what mathematical logic is about. But mathematicians do not think, speak, or write in any formal language. They speak in the common language, taking special care to make their statements and arguments as unambiguous as possible. A Curry-Howard translation of a program does not make the program an application of mathematics. It means that a program can be reinterpreted in terms of mathematical logic.
An application of mathematics takes some tools or theorems developed by mathematicians and uses them. There are zillions of such applications. My claim is that the theorems and theories used do not need to be rigorously proved in order to be useful for applications. For example, Newton’s calculus was very useful for about two centuries before rigorous proofs of its main theorems were developed by the end of the 19th century. This fact is even reflected in university curricula. Everybody who may need calculus (and millions who will never have a chance to use it) is taught a course with no more than remnants of some proofs. A full-fledged calculus with precise definitions and proofs is taught only to a few, mostly mathematics and statistics majors.
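To illustrate this point with a toy computation of my own (not from the discussion): a finite-difference derivative is a perfectly usable tool regardless of whether its user has ever seen an epsilon-delta proof.

```typescript
// Central finite-difference approximation of the derivative.
// Calculus was used this way, as a computational tool, for about two
// centuries before its rigorous foundations were in place.
function deriv(f: (x: number) => number, x: number): number {
  const h = 1e-6;
  return (f(x + h) - f(x - h)) / (2 * h);
}

// deriv(Math.sin, 0) is approximately 1.0, i.e. cos(0).
```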
Instead of trying to impress the public with the Curry-Howard correspondence, you could have used much simpler examples. For example, any use of the standard algorithms of addition and multiplication is a proof. The computation of 12+12 is a proof of the theorem 12+12=24. Probably, thousands of years ago it was a great and surprising theorem. That time is long gone; now 12+12=24 is a triviality not belonging to mathematics. Even chess is a part of mathematics, but for mathematicians it is a trivial and totally uninteresting part.
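In a modern proof assistant this is literal: the computation is the proof. For instance, in Lean the whole proof of this “theorem” is reflexivity, since both sides evaluate to the same number:

```lean
-- Both sides compute to 24, so reflexivity closes the proof.
example : 12 + 12 = 24 := rfl
```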
So, the moral is the following. Mathematics is a human activity and not a formal deductive system. Similarly, a proof is something accessible to humans (adequately educated) and not a formal deduction.
[Moving away from the lights and the party like under a spell of an internal force; speaking in a very low voice]:
But there’s something interesting in Curry-Howard... It is completely irrelevant to the topic here, but there is one step deep inside the proof... Maybe this step...