Friday, January 28, 2011

Intuition and Mathematics: A Defense

Back in my days at Oberlin College, I had a memorably good math professor whom I consider one of my mentors. Her grasp of the subject and the toughness of her problem sets helped me learn math that I hadn't thought myself capable of learning at the time. There was one point she made, however, that in hindsight I can't help but adamantly disagree with.

She said that intuition was what messed mathematicians up. Her points leading up to that conclusion were valid: mathematics is a language; you have a set of symbols and a set of rules for manipulating those symbols, and to prove something you follow those rules precisely, moving the symbols into the configuration you'd like. Intuition can fail, but math never lies.

This fact about math has given rise to some amazing tools, not least the ability of a computer to prove new theorems by arbitrarily moving symbols around until it finds something of note. The truth, however, is that, as far as I know, these automated theorem provers have proven very little of value compared to the work of mathematicians; most applications of computers to theorem proving are programs custom-written to handle the details of a particular argument, such as the proof of the Four Color Theorem. But I digress.

My professor was certainly right that intuition is flawed whereas following the rules doesn't fail; but what she didn't acknowledge is that our intuition can always be verified by checking the logic of our hunch. We can make a guess as to what the shape of a proof might be, then write the proof up to see whether it logically works. In fact, I think this process of guessing and verifying is essential to solving problems in mathematics.

This point is best made by way of a concept known as computational complexity (if you're not a technical person, please don't panic; it'll be clear in a minute). Computational complexity is a measure of how difficult a problem is for a computer to solve; problems are sorted into different complexity classes based on how hard they are. I'd like to focus on two, known as P and NP. P stands for polynomial time: the problem can be solved by an algorithm in a polynomial number of steps. NP stands for nondeterministic polynomial time: the problem can't necessarily be solved in a polynomial number of steps, but if you were handed a guess at the answer, you could check whether that guess is right in a polynomial number of steps.
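To make that solve-versus-verify gap concrete, here's a minimal sketch in Python using subset sum, a classic NP problem (the example and the function names are my own illustration, nothing more): finding a subset of a list that adds up to a target can force you to try an exponential number of subsets, but checking a proposed subset takes just one pass over it.

```python
from itertools import combinations

def solve_subset_sum(numbers, target):
    """Brute-force search: try every subset until one sums to the target.
    There are 2^n subsets, so this blows up quickly as the list grows."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

def verify_subset_sum(numbers, target, guess):
    """Verification: a guessed subset needs only one membership check
    and one addition per element."""
    return all(x in numbers for x in guess) and sum(guess) == target

nums = [3, 34, 4, 12, 5, 2]
print(solve_subset_sum(nums, 9))           # exhaustive search finds (4, 5)
print(verify_subset_sum(nums, 9, (4, 5)))  # checking a lucky guess: True
```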

Now, to be fair, it might turn out that every problem in NP is also in P (though it's widely suspected that this isn't the case). Assuming it isn't, there is a whole slew of interesting problems that can't be solved in a reasonable amount of time, but whose answers could be confirmed quickly were somebody to make a "lucky guess."

But how "lucky" is that guess? Our intuitions solve extremely complicated problems with a very high accuracy rate. They don't prove anything, but they make very good guesses with a very high chance of success in a very short amount of time. After all, our brains are some of the most complex computers imaginable, with complex networks made up of trillions of neurons and connections; we can recognize faces, navigate social situations with countless unwritten rules and outsmart a computer at just about anything that's not a game with transparent rules (and computers still can't play Go for shit.)

Our intuition allows us to stumble upon the answer to a problem with a very good chance of success. After that, we can use logic to see if we've done it right—as the P/NP distinction illustrates, verification isn't nearly as costly (in most cases) as solving.
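To put that whole loop in one place: a cheap guess generator feeding a cheap checker. This is only a toy sketch (blind random guessing standing in, very crudely, for intuition), reusing the hypothetical verifier from the sketch above:

```python
import random

def verify_subset_sum(numbers, target, guess):
    # The cheap, rigorous part: does the guess actually check out?
    return all(x in numbers for x in guess) and sum(guess) == target

def guess_and_verify(numbers, target, attempts=10_000):
    """Generate cheap guesses and keep only one that survives verification."""
    for _ in range(attempts):
        guess = tuple(x for x in numbers if random.random() < 0.5)
        if verify_subset_sum(numbers, target, guess):
            return guess
    return None  # no verified guess found within the attempt budget

print(guess_and_verify([3, 34, 4, 12, 5, 2], 9))  # likely (4, 5) or (3, 4, 2)
```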

[End of neat, pre-packaged entry; on to messy stuff]

(Warning: I am less sure about the following. Feel free to correct me on this; and as always, argue with me.) Another way of thinking about this is that with each level of computational complexity, the number of possible solutions to a problem grows ever more steeply (non-linearly). Logic can only go so fast (at a linear speed, I suspect) and can't keep up with the increasing complexity of problems. Our intuition has two particular strengths:

1) It runs in parallel, which allows us to detect patterns very fast. Our neurons don't even fire all that fast; their effectiveness comes from the fact that we can distinguish information using space rather than time. No computer touches us in that regard.

2) It is built to approximate and to work in probabilities rather than to think in absolutes, which is what my professor's argument encouraged mathematicians to do. Approximations don't always solve the problem, but in a lot of cases a small decrease in accuracy (say, a 10% chance you're wrong) can let us find an answer much more efficiently. So long as we can verify the answer in an efficient amount of time (that's what proofs are for), mathematicians can have their cake and eat it too.
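One concrete instance of that trade-off (an illustration I'm choosing here, not something my professor raised): the Fermat primality test tolerates a small, tunable chance of being fooled in exchange for being vastly faster than checking every possible divisor.

```python
import random

def fermat_probably_prime(n, trials=20):
    """Probabilistic check: any single trial can be fooled, but adding
    trials shrinks the chance of error while the cost stays tiny."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a witness: n is definitely composite
    return True  # probably prime

def trial_division_prime(n):
    """Exact but slow: checks every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

big = 2**89 - 1  # a Mersenne prime; trial division would take ages here
print(fermat_probably_prime(big))  # True, almost instantly
```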