Tuesday, December 15, 2009

What the Indispensability Argument Isn't

When I first heard of Quine's indispensability argument years ago (roughly: we are committed to the existence of abstract objects, since we must quantify over them in order to state our best physical theory), I misunderstood it.

I thought Quine was trying to draw an analogy between nominalists and, say, people who deny that there are planets but will admit all the usual observations through telescopes. We find the planet denier's position implausible. We want to say: "If there aren't planets, how come, as you admit, everything we see through telescopes behaves just as it would if there were planets? If there are planets, this explains the order and regularity of what we see through telescopes. But if there aren't planets, how come, out of all the mind-bogglingly many possible patterns of optical illusions, we happen to have ones that are just like ones that could be produced by seeing persisting objects in space?"

But, quite aside from not being what Quine meant (as I was soon told), this is really not a good argument, as I shall now argue for the benefit of anyone else who is tempted by it.

The explanatory-inadequacy argument above crucially turns on our having a notion of the observations we would have if there were planets vs. various other patterns of observation which would *not* be consistent with there being planets. The planet denier's position is unattractive because, on their theory, it looks like a miracle that we happened to get a coherent pattern of optical illusions that could have been produced by normal vision of real objects.

But, in the mathematical case, there is no such contrast. There is no pattern in the behavior of physical objects which suggests the existence of numbers. For what would the contrast class to exhibiting this pattern be? It's not like we think: actually cannonballs accelerate towards the earth at 9.8 m/s^2, but if there weren't numbers, they would probably just fly around crazily. On the platonist's own view, the objects (numbers and functions) in calculus don't come down and beat on cannonballs to make them behave in ways that are describable by short differential equations. Nor do they prevent cars from going two meters per second for two seconds in a given direction while only traveling one meter in total (cars don't need to be prevented from doing the metaphysically impossible). So it's not the case that the platonist would expect different behavior if there weren't any numbers (the way we would expect different behavior of telescopes if there weren't any planets). Thus, there's no argument to be made against the nominalist along the lines of "If there aren't numbers, how can you explain the fact that things happen to look just like they would if there were numbers?"
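(Just to spell out the trivial arithmetic behind the car example, i.e. why that scenario is metaphysically impossible rather than something that needs to be prevented:)

\[
\text{distance} = \text{speed} \times \text{time} = 2\,\tfrac{\text{m}}{\text{s}} \times 2\,\text{s} = 4\,\text{m} \neq 1\,\text{m}.
\]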

Instead, the point of the Indispensability Argument (or the only plausible version of it) is that the nominalist cannot even *state* his theory of the physical objects he accepts, and how they behave, without quantifying over abstract objects, and hence contradicting himself. To summarize: Quine isn't saying we need numbers to explain observed patterns in the behavior of physical objects. He's saying we need numbers to even state the relevant patterns in the behavior of physical objects.

"No Fact of the Matter" Paradox?

Here's a line of reasoning I just came up with that seems paradoxical.

(1) Quine points out that there's a kind of Sorites series of different theories positing "atoms", ranging from Democritus' theory, where the whole point of something being an atom was that atoms are indivisible, to current theories, on which atoms are in fact divisible. (Let's use "atom0" to express Democritus' notion of atoms.)

(2) This suggests that when you are far enough away from having a correct overall theory of some phenomenon, the truth values of claims made with your scientific words can be vague. For, if it is vague whether some intermediate scientist counted as meaning atom by "atom", rather than atom0 or some other notion, then it is vague whether their assertion "there are atoms" expressed a truth.

(3) Science progresses, and we clearly have more to learn about fundamental physics (e.g. how to reconcile QM and Relativity), so we are probably in the same boat with regard to some of our current theoretical terms, maybe "quark" or "superstring". Suppose (without loss of generality) this is true of "quark".

(4) If (3) is right, there's no fact of the matter about whether "there are quarks" (as said by me now) expresses a truth.

(5) But (assuming we can apply Tarski's T-schema to an ordinary-looking case like this), "there are quarks" expresses a truth if and only if there are quarks. (This step is schematized below.)

(6) So, there's no fact of the matter about whether there are quarks. (!)

(Conclusion) Either there's no fact of the matter about whether there are quarks, or there's no fact of the matter about whether there are strings, or the like for some other term with a similar role in physics.
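Schematically, writing D for "it is a determinate matter whether" (just a rough gloss of mine, not anyone's official notation), the step from (4) and (5) to (6) is:

\[
\begin{aligned}
(4)\quad & \neg D(\text{``there are quarks'' expresses a truth})\\
(5)\quad & \text{``there are quarks'' expresses a truth} \;\leftrightarrow\; \text{there are quarks}\\
(6)\quad & \neg D(\text{there are quarks})
\end{aligned}
\]

where (6) comes from substituting via the biconditional in (5) inside the D-context.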

At least, if the conclusion is true, it would be very surprising, since when someone says "there's no fact of the matter as to whether X" we usually take them to be suggesting that we dismiss the question, while, presumably, the question of whether there are quarks or strings is a paradigm of the kind of question we DO want to invest energy in discussing.

Monday, December 14, 2009

Does mathematics "need" new axioms?

Here's something I'd like to figure out. When philosophers ask "Does mathematics need new axioms?", what is the intended task, such that they are asking whether we would need new axioms to accomplish it?

Here are some possibilities:

-to know all mathematical truths (well, we can't do that, with or without new axioms)

-to formally capture all our intuitive judgements about mathematics (there are familiar Putnam vs. Penrose reasons for thinking we can't do that either)

-to formalize some particular body of generally accepted mathematical reasoning, where everyone agrees on what's a good argument, but this can't be captured by logic plus the axioms we currently accept, and having a formalization would be practically helpful.

-to be in a state of believing all propositions which we are justified in believing.

It seems to me that there's a great danger of launching into the debate about whether "math needs new axioms", and taking a position based on whether, e.g., you like or dislike set theory, without having any clear sense of what you are claiming we do/don't need new axioms for. Hence, I'd like to get clearer on the different senses the question can have, and which one(s) are at stake in typical philosophical discussion.

Wednesday, December 9, 2009

Bookclub: 'Compositionality, Understanding, and Proofs'

In the latest Mind, Peter Pagin argues that Dummett's proof-theoretic semantics is incompatible with compositionality, a popular view in philosophy of language.

Compositionality is the view that the meaning of a sentence is completely determined by the meanings of its parts, i.e. for every connective that might be used to build up a sentence, there's a composition function which takes the meanings of whatever components the connective is being applied to, to the meaning of the overall thing you get after applying the connective.

Proof-theoretic semantics is the idea that: a) understanding a sentence consists in an ability to recognize (canonical) proofs of that sentence, and b) the meaning of a sentence is "the property of being a proof of that sentence".

Odd as I feel defending Dummett on any subject, I think Pagin is wrong to say these two things are incompatible.

What compositionality (as stated in e.g. the Stanford Encyclopedia, and "informally" by Pagin himself) requires is that, for each connective phi, there be a function Cphi which takes <the property of being a proof of p, the property of being a proof of q, the property of being a proof of r> to <the property of being a proof of phi(p, q, r)>. But if you accept compositionality at all, this has to be the case, because the property of being a proof of phi(x) can only differ from the property of being a proof of phi(y) if x and y are different sentences, and hence (since distinct sentences have distinct proof-properties) the property of being a proof of x differs from the property of being a proof of y. I don't think Pagin would deny this.
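Just to make the picture vivid, here's a toy sketch (entirely mine; the mini-language, connectives, and proof forms are invented for illustration and aren't from Pagin or Dummett) of a compositional semantics where meanings are proof-properties and there is one composition function per connective:

```haskell
-- A toy compositional, proof-theoretic semantics (illustrative only).

-- Sentences of a tiny propositional language.
data Sentence
  = Atom String
  | And Sentence Sentence
  | Or Sentence Sentence
  deriving (Eq, Show)

-- (Canonical) proofs of sentences.
data Proof
  = AtomProof String        -- a stipulated basic proof of a given atom
  | AndProof Proof Proof    -- proves (And a b) from proofs of a and b
  | OrProofL Proof          -- proves (Or a b) from a proof of a
  | OrProofR Proof          -- proves (Or a b) from a proof of b
  deriving (Show)

-- On proof-theoretic semantics, the meaning of a sentence is
-- "the property of being a (canonical) proof of that sentence".
type Meaning = Proof -> Bool

-- One composition function per connective, as compositionality requires:
-- the meaning of the compound is a function of the meanings of its parts.
cAnd :: Meaning -> Meaning -> Meaning
cAnd mA mB (AndProof pa pb) = mA pa && mB pb
cAnd _  _  _                = False

cOr :: Meaning -> Meaning -> Meaning
cOr mA _  (OrProofL pa) = mA pa
cOr _  mB (OrProofR pb) = mB pb
cOr _  _  _             = False

-- The semantic function, defined by recursion on syntax.
meaning :: Sentence -> Meaning
meaning (Atom s) (AtomProof s') = s == s'
meaning (Atom _) _              = False
meaning (And a b) p             = cAnd (meaning a) (meaning b) p
meaning (Or a b) p              = cOr (meaning a) (meaning b) p

-- e.g. meaning (And (Atom "p") (Atom "q"))
--              (AndProof (AtomProof "p") (AtomProof "q"))  ==  True
```

The point is just that composition functions like cAnd trivially exist once meanings are taken to be proof-of properties.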

The problem is that Pagin seems to think compositionality + proof-theoretic semantics requires something more. He writes:

"The combination of proof-theoretic semantics with the requirement of recognizability of proofs comes into conflict with compositionality. For assume that we have a semantic function phi for a language L. A generalized composition function {rho} for phi must then meet two conditions: (i) it must be possible to know the meaning of any complex expression in L by knowing {rho}, the modes of composition and the meaning of simple expressions; and (ii) the condition of being a canonical proof must, for every provable sentence A, be met by some proof that is recognizable by any speaker who understands A."

Note the switch here from the idea that compositionality says there must BE a function, to the claim that it must be possible to learn the meaning of complex expressions by KNOWING this function together with various other facts.

Firstly, the very idea of "knowing rho" (where rho is a function) makes me feel itchy and confused. I understand what it is to know *that something is the case*, e.g. that a function f takes a certain value on a certain input. And I (kind of) understand what it is to know a person (e.g. I don't know Bill Gates, but I do know my advisor W.G.). But what's the equivalent of being on a first-name basis with an abstract mathematical object? Does knowing a function mean being able to compute it? Being able to give a definite description that refers to it? Being able to give two distinct definitions and knowing that they pick out the same function?

My best guess at what Pagin intends here is that 'knowing rho' = knowing some proposition of the form '___ is the composition function for whatever language L is in question'.

But now, note that Pagin's claim doesn't follow at all from the idea of compositionality, i.e. that the meaning of a composite sentence completely supervenes on the meanings of the pieces it is composed out of. The claim that a function with a certain property *exists* does not entail that it is possible to *know* that such a function exists, or that this function is computable, or that it is possible to know which program computes it! So compositionality doesn't imply that it's even possible to have such knowledge, much less that it's possible to use this knowledge to learn the meaning of various composite expressions.

This distinction is especially crucial to remember in the context of discussing Godel's Theorem. For, remember from the Putnam-Penrose debate that all our reasoning about mathematics might well *be* recursively axiomatizable; it's just that we couldn't use mathematical reasoning to come to *know* what this recursive axiomatization was.

And, alas, Godel is exactly where Pagin is headed. For his argument turns out to be that, if you could know some concrete specification of the composition function rho, you could mill out a recursive specification of the class C of acceptable proofs in number theory; you could then use this to construct an acceptable proof of the Con sentence for C, which is itself a statement in number theory but (by Godel's second incompleteness theorem) cannot be proved within C. Contradiction.

Pagin's conclusion is that compositionality and proof-theoretic semantics are incompatible. But, if this argument works, all it really shows is that proof-theoretic semantics requires that one could not come to *know* a recursive specification of the composition function rho.

At this point, Pagin might say that the whole point of compositionality is to explain how we can know the meanings of complex sentences by knowing the meanings of their parts, so that accepting this point would be bad news for the proof-theoretic semanticist. But note that we obviously don't understand composite sentences by explicitly breaking them down into parts. So the fact that we could never realize that something was a concrete specification of the composition function for our language doesn't prevent compositionality from helping to explain our linguistic abilities.

Tuesday, December 8, 2009

Justification vs. Truth Puzzle

For the purposes of this post, I'm assuming something like the intuitive notion of justification makes sense.

Sometimes people say:

1. "You should believe what's true, and avoid believing what's false."

Other times they say:

2. "You should believe what's justified, and avoid believing what's unjustified."

But prima facie, these are incompatible demands, since there are many true propositions which I am not justified in believing, like statements of the form "Tomorrow's winning lottery number will be ....", and 1 seems to entail that I should believe these claims, while 2 seems to entail that I shouldn't.

Puzzle: Can these two claims be made compatible? What is the relationship between them?

First pass: maybe we want to give the "should" wide scope? e.g.
1 = Should[(Ax) Believe(x) <--> Expresses-a-Truth(x)]
2 = Should[(Ax) Believe(x) <--> You-are-justified-in-believing(x)]
Though this suggests the conclusion that you should bring it about (by some kind of superhuman feat of evidence gathering?) that you are justified in believing every truth. Which is, maybe, odd.
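For contrast, the narrow-scope readings on which the prima facie clash arises would be something like this (my own rough regimentation, not standard notation from anywhere):

\[
\begin{aligned}
1_{\mathrm{narrow}}:\quad & (\forall x)\,[\,\text{Expresses-a-Truth}(x) \rightarrow \text{Should}(\text{Believe}(x))\,]\\
2_{\mathrm{narrow}}:\quad & (\forall x)\,[\,\neg\text{You-are-justified-in-believing}(x) \rightarrow \text{Should}(\neg\text{Believe}(x))\,]
\end{aligned}
\]

For a true-but-unjustified x, like the lottery sentence, these deliver Should(Believe(x)) and Should(not-Believe(x)) together, whereas the wide-scope readings issue only a single global obligation rather than verdicts sentence by sentence.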

Monday, December 7, 2009

Anscombe + Descartes

If Descartes' argument for dualism really is (as Anscombe seems to be suggesting in her essay "The First Person"):

I know there's a thinking thing (namely myself).
I don't know whether there are any bodies.
Therefore: there's a thinking thing which is not a body.

then, it seems to me, his argument is immediately fallacious. I mean, it's exactly like someone saying, after shaking a box and hearing a rattle:

I know there's a thing-inside-this-box (namely the one I heard rattle).
I don't know whether there are any marbles-inside-this-box.
Therefore: there's a thing in this box that isn't a marble.

Given that one and the same object can have two different properties A and B, it's obviously possible to know that there's something with property A while being ignorant as to whether there's anything that has property B, even when the thing with A is in fact a B!

So there's no need for Anscombe to go to the lengths of denying that "I" refers in order to escape the force of such a weak argument as this, it seems to me.

Kant Puzzle

Based on my Kant 101 level knowledge of the subject, it's tempting to think:
- Kant's problem of accounting for synthetic a priori knowledge is to explain how we are able to make certain (non-analytic) judgments in advance of experience, which experience then bears out.
- Kant's answer is that our minds organize experience in such a way that, whatever input comes in from the noumena, we will always represent a scenario in which these propositions hold true. So, for example, I can know in advance of experience that there are no round squares, because my mind organizes experience in such a way that it couldn't represent a round square.

BUT if this understanding is right, then Kant's answer doesn't seem to do a very good job of addressing the problem he poses. For the mere fact that my mind couldn't represent a scenario in which ~P does nothing to ensure (or even make it likely in any obvious way) that I would *realize* that I couldn't have an experience as if of ~P.

I mean, think about what we find attractive. It might be a psychological fact about me that no physically possible configuration of matter would strike me as constituting a person who is both tall and attractive. The algorithms that produce my feelings of attraction and the ones that detect tallness might be such that no possible sensory input could set off both. But none of this entails that I *know* that I am incapable of finding tall people attractive. Perhaps all I know, at any given time, is that I have not seen or imagined an attractive tall person *yet*.

Thus, even if Kant's claims that mathematics, the principles of causation, and so forth are somehow artifacts of the way the human mind organizes experience were true, this (it seems) would not yet constitute an explanation of how we manage to know about these subjects.

So the puzzle is: what should Kant's reply be and/or where is the interpretive failure in the argument above?