When I first heard of Quine's indispensability argument (we are committed to the existence of abstract objects, since we must quantify over them in order to state our best physical theory) years ago, I misunderstood it.
I thought Quine was trying to draw an analogy between nominalists and, say, people who deny that there are planets but will admit all the usual observations through telescopes. We find the planet denier's position implausible. We want to say: "If there aren't planets, how come (as you admit) everything we see through telescopes behaves just as it would if there were planets? If there are planets, this explains the order and regularity of what we see through telescopes. But if there aren't planets, how come, out of all the mind-bogglingly many possible patterns of optical illusions, we happen to have ones that are just like ones that could be produced by seeing persisting objects in space?"
But, even setting aside the fact that this is not what Quine meant (as I was soon told), it is really not a good argument, as I shall now argue for the benefit of anyone else who is tempted by it.
The explanatory inadequacy argument above crucially turns on our having a notion of the observations we would have if there were planets vs. various other patterns of observation which would *not* be consistent with there being planets. The planet denier's position is unattractive because, on their theory, it looks like a miracle that we happened to get a coherent pattern of optical illusions that could have been produced by normal vision of real objects.
But in the mathematical case there is no such contrast. There is no pattern in the behavior of physical objects which suggests the existence of numbers. For what would the contrast class to exhibiting this pattern be? It's not as if we think: actually cannonballs accelerate towards the earth at 9.8 m/s^2, but if there weren't numbers, they would probably just fly around crazily. On the platonist's own view, the objects (numbers and functions) in calculus don't come down and beat on cannonballs to make them behave in ways that are describable by short differential equations. Nor do they prevent cars from going two meters per second for two seconds, in a given direction, but only traveling one meter in total (cars don't need to be prevented from doing the metaphysically impossible). So it's not the case that the platonist would expect different behavior if there weren't any numbers (the way we would expect different behavior of telescopes if there weren't any planets). Thus, there's no argument to be made against the nominalist along the lines of "If there aren't numbers, how can you explain the fact that things happen to look just like they would if there were numbers?"
Instead, the point of the Indispensability Argument (or the only plausible version of it) is that the nominalist cannot even *state* his theory of the physical objects he accepts, and how they behave, without quantifying over abstract objects, and hence contradicting himself. To summarize: Quine isn't saying we need numbers to explain observed patterns in the behavior of physical objects. He's saying we need numbers to even state the relevant patterns in the behavior of physical objects.
Tuesday, December 15, 2009
"No Fact of the Matter" Paradox?
Here's a line of reasoning I just came up with that seems paradoxical.
(1) Quine points out that there's a kind of sorites series of different theories positing "atoms", ranging from Democritus' theory, where the whole point of something being an atom was that atoms are indivisible, to current theories on which atoms are in fact divisible. (Let's use "atom0" to express Democritus' notion of atoms.)
(2) This suggests that when you are far enough away from having a correct overall theory of some phenomenon, the truth values of your scientific claims can be vague. For if it is vague whether some intermediate scientist counted as meaning atom (rather than atom0, or some other notion) by "atom", then it is vague whether their assertion "there are atoms" expressed a truth.
(3) Science progresses, and we clearly have more to learn about fundamental physics (e.g. how to reconcile QM and Relativity), so we are probably in the same boat with regard to some of our current theoretical terms, maybe "quark" or "superstring". Suppose (without loss of generality) this is true of "quark".
(4) If (3) is right, there's no fact of the matter about whether "there are quarks" (as said by me now) expresses a truth.
(5) But (assuming we can apply Tarski's T-schema to an ordinary-looking case like this), "there are quarks" expresses a truth if and only if there are quarks.
(6) So, there's no fact of the matter about whether there are quarks. (!)
(Conclusion) Either there's no fact of the matter about whether there are quarks, or there's no fact of the matter about whether there are strings, or the like for some other term with a similar role in physics. (The key step, from (4) and (5) to (6), is rendered schematically below.)
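Here is that step in my own notation (nothing in the original argument is stated in these symbols), writing Fact(p) for "there is a fact of the matter whether p" and True(s) for "s expresses a truth":

(4) $\neg\mathrm{Fact}(\mathrm{True}(\text{``there are quarks''}))$
(5) $\mathrm{True}(\text{``there are quarks''}) \leftrightarrow \text{there are quarks}$
(6) $\neg\mathrm{Fact}(\text{there are quarks})$

Getting (6) from (4) and (5) requires substituting equivalents within the scope of Fact, which is where the reasoning leans on the T-schema.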
At least, if the conclusion is true, this would be very surprising, since when someone says "there's no fact of the matter as to whether X" we usually take them to be suggesting that we dismiss the question, while, presumably, whether there are quarks or strings is a paradigm of the kind of question we DO want to invest energy in discussing.
Monday, December 14, 2009
Does mathematics "need" new axioms?
Here's something I'd like to figure out. When philosophers ask "Does mathematics need new axioms?", what is the intended task, such that they are asking whether we would need new axioms to accomplish it?
Here are some possibilities:
-to know all mathematical truths (well, we can't do that, with or without new axioms)
-to formally capture all our intuitive judgements about mathematics (there are familiar Putnam vs. Penrose reasons for thinking we can't do that either)
-to formalize some particular body of generally accepted mathematical reasoning, where everyone agrees on what's a good argument, but this can't be captured by logic plus the axioms we currently accept, and having a formalization would be practically helpful.
-to be in a state of believing all propositions which we are justified in believing.
It seems to me that there's a great danger of launching into the debate about whether "math needs new axioms", and taking a position based on whether e.g. you like or dislike set theory, without having any clear sense of what you are claiming that we do/don't need new axioms for. Hence, I'd like to get clearer on the different senses the question can have, and which one(s) are at stake in typical philosophical discussion.
Wednesday, December 9, 2009
Bookclub: 'Compositionality, Understanding, and Proofs'
In the latest Mind, Peter Pagin argues that Dummett's proof-theoretic semantics is incompatible with compositionality, a popular view in philosophy of language.
Compositionality is the view that the meaning of a sentence is completely determined by the meanings of its parts, i.e. for every connective that might be used to build up a sentence, there's a composition function which takes the meanings of whatever components the connective is being applied to, to the meaning of the overall thing you get after applying the connective.
Proof theoretic semantics is the idea that: a) understanding a sentence consists in an ability to recognize (canonical) proofs of that sentence, and b) the meaning of a sentence is "the property of being a proof of that sentence".
Odd as I feel defending Dummett, on any subject, I think Pagin is wrong to say these two things are incompatible.
What compositionality (as stated in e.g. the Stanford Encyclopedia, and "informally" by Pagin himself) requires is that, for each connective phi, there be a function Cphi which takes <the property of being a proof of p, the property of being a proof of q, the property of being a proof of r> to <the property of being a proof of phi(p, q, r)>. But if you accept compositionality at all, this has to be the case: the property of being a proof of phi(x) can differ from the property of being a proof of phi(y) only if x and y are different, and hence only if the property of being a proof of x differs from the property of being a proof of y. So the map from the components' proof-properties to the compound's proof-property is a well-defined function. I don't think Pagin would deny this.
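To see concretely that such a function exists, here is a toy sketch in Python (entirely my own illustration, with a made-up notion of "canonical proof"; nothing in it comes from Pagin's paper or Dummett). Meanings are modelled as predicates on proof objects, and the composition function for "and" builds the conjunction's proof-property out of the conjuncts' proof-properties:

def is_canonical_proof(proof, sentence):
    # Toy recursive definition: an atomic sentence s is proved by ("basic", s);
    # a conjunction ("and", p, q) is proved by a pair of proofs of p and q.
    if isinstance(sentence, str):
        return proof == ("basic", sentence)
    if sentence[0] == "and":
        return (isinstance(proof, tuple) and proof[0] == "pair"
                and is_canonical_proof(proof[1], sentence[1])
                and is_canonical_proof(proof[2], sentence[2]))
    return False

def meaning(sentence):
    # Proof-theoretic meaning of a sentence: the property of being a
    # canonical proof of it.
    return lambda proof: is_canonical_proof(proof, sentence)

def c_and(meaning_p, meaning_q):
    # Composition function for "and": takes the proof-properties of the
    # conjuncts to the proof-property of the conjunction.
    return lambda proof: (isinstance(proof, tuple) and proof[0] == "pair"
                          and meaning_p(proof[1]) and meaning_q(proof[2]))

# Sanity check: the composed meaning and the directly assigned meaning agree.
p, q = "it is raining", "it is cold"
sample = ("pair", ("basic", p), ("basic", q))
assert meaning(("and", p, q))(sample) == c_and(meaning(p), meaning(q))(sample) == True

The point is only the bare existence of something like c_and; nothing here speaks to whether a speaker could come to *know* such a function, which is where the disagreement with Pagin lies.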
The problem is that Pagin seems to think compositionality + proof theoretic semantics requires something more. He writes:
"The combination of proof-theoretic semantics with the requirement of recognizability of proofs comes into conflict with compositionality. For assume that we have a semantic function phi for a language L. A generalized composition function {rho} for phi must then meet two conditions: (i) it must be possible to know the meaning of any complex expression in L by knowing {rho}, the modes of composition and the meaning of simple expressions; and (ii) the condition of being a canonical proof must, for every provable sentence A, be met by some proof that is recognizable by any speaker who understands A."
Note the switch here from the idea that compositionality says there must BE a function, to the claim that it must be possible to learn the meaning of words by KNOWING this function together with various other facts.
Firstly, the very idea of "knowing rho" (where rho is a function) makes me feel itchy and confused. I understand what it is to know *that something is the case*, e.g. that a function f takes a certain value on a certain input. And I (kind of) understand what it is to know a person (e.g. I don't know Bill Gates, but I do know my advisor W.G.). But what's the equivalent of being on a first-name basis with an abstract mathematical object? Does knowing a function mean being able to compute it? Being able to give a definite description that refers to it? Being able to give two distinct definitions and knowing that they pick out the same function?
My best guess at what Pagin intends here is that "knowing rho" = knowing some true proposition of the form "... is the composition function for whatever language L is in question".
But now, note that Pagin's claim doesn't follow at all from the idea of compositionality - that the meaning of a composite sentence completely supervenes on the meanings of the pieces it is composed out of. The claim that a function with a certain property *exists* does not entail that it is possible to *know* that such a function exists, or that this function is computable, or that it is possible to know which program computes it! So compositionality doesn't imply that it's even possible to have such knowledge, much less that it's possible to use this knowledge to learn the meaning of various composite expressions.
This distinction is especially crucial to remember in the context of discussing Godel's theorem. For, remember from the Putnam-Penrose debate that all our reasoning about mathematics might well *be* recursively axiomatizable; it's just that we couldn't use mathematical reasoning to come to *know* what this recursive axiomatization was.
And, alas, Godel is exactly where Pagin is headed. For his argument turns out to be that, if you could know some concrete specification of the composition function rho, you could read off a recursive specification of the class C of acceptable proofs in number theory; you could then use this to construct an acceptable proof of the Con sentence for C, which is itself a statement of number theory but (by Godel II) cannot be proved in C. Contradiction.
Pagin's conclusion is that compositionality and proof-theoretic semantics are incompatible. But if this argument works, all it really shows is that proof-theoretic semantics requires that one could not come to *know* a recursive specification of the composition function rho.
At this point, Pagin might say that the whole point of compositionality is to explain how we can know the meanings of complex sentences by knowing their parts, so that accepting this point would be bad news for the proof-theoretic semanticist. But note that we obviously don't understand composite sentences by explicitly breaking them down into parts. So the fact that we could never realize that something was a concrete specification of the composition function for our language doesn't prevent compositionality from helping explain our linguistic abilities.
Tuesday, December 8, 2009
Justification vs. Truth Puzzle
For the purposes of this post, I'm assuming something like the intuitive notion of justification makes sense.
Sometimes people say:
1. "You should believe what's true, and avoid believing what's false."
Other times they say:
2. "You should believe what's justified, and avoid believing what's unjustified."
But prima facie, these are incompatible demands, since there are many true propositions which I am not justified in believing, like statements of the form "Tomorrow's winning lottery number will be ....", and 1 seems to entail that I should believe these claims, while 2 seems to entail that I shouldn't.
Puzzle: Can these two claims be made compatible? What is the relationship between them?
First pass: maybe we want to wide-scope? E.g.
1 = Should[(Ax)(Believe(x) <--> Expresses-a-Truth(x))]
2 = Should[(Ax)(Believe(x) <--> You-are-justified-in-believing(x))]
Though this suggests the conclusion that you should bring it about (by some kind of superhuman feat of evidence gathering?) that you are justified in believing every truth. Which is, maybe, odd.
Monday, December 7, 2009
Anscombe + Descartes
If Descartes' argument for dualism really is (as Anscombe seems to be suggesting in her essay "The First Person"):
I know there's a thinking thing (namely myself).
I don't know whether there are any bodies.
Therefore: there's a thinking thing which is not a body.
then it seems to me, his argument is immediately fallacious. I mean, it's exactly like someone saying, after shaking a box and hearing a rattle:
I know there's a thing-inside-this-box (namely the one I heard rattle).
I don't know whether there are any marbles-inside-this-box.
Therefore: there's a thing in this box that isn't a marble.
Given that objects can have multiple different properties A and B, it's obviously possible to know that there's something with property A while being ignorant as to whether there's anything that has property B!
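Put schematically, in my own notation rather than Anscombe's or Descartes', the complaint is that the following inference form is invalid (writing K for "I know that"):

\[ K(\exists x\, A(x)) \;\wedge\; \neg K(\exists x\, B(x)) \;\not\vdash\; \exists x\,(A(x) \wedge \neg B(x)) \]

Reading A as "is a thinking thing" and B as "is a body" gives the Descartes version; reading them as "is inside this box" and "is a marble inside this box" gives the rattle case.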
So there's no need for Anscombe to go to the lengths of denying that "I" refers in order to escape the force of such a weak argument, it seems to me.
Kant Puzzle
Based on my Kant 101 level knowledge of the subject, it's tempting to think:
- Kant's problem of accounting for synthetic knowledge is to explain how we are able to make certain (non-analytic) judgments in advance of experience, which experience then bears out.
- Kant's answer is that our minds organize experience in such a way that whatever input comes in from the noumena we will always represent a scenario in which these propositions hold true. So, for example, I can know in advance of experience that there are no round squares because my mind organizes experience in such a way that it couldn't represent a round square.
BUT if this understanding is right, then Kant's answer doesn't seem to do a very good job of addressing the problem he poses. For the mere fact that my mind couldn't represent a scenario in which ~P does nothing to ensure (or even make it likely in any obvious way) that I would *realize* that I couldn't have an experience as if of ~P.
I mean, think about what we find attractive. It might be a psychological fact about me that no physically possible configuration of matter would strike me as constituting a person who is both tall and attractive. The algorithms that produce my feelings of attraction and the ones that detect tallness might be such that no possible sensory input could set off both. But none of this entails that I *know* that I am incapable of finding tall people attractive. Perhaps all I know, at any given time, is that I have not seen or imagined an attractive tall person *yet*.
Thus even if Kant's claim that mathematics, principles of causation and so forth are somehow an artifact of the way the human mind organizes experience were true, this (it seems) would not yet constitute an explanation of how we manage to know about these subjects.
So the puzzle is: what should Kant's reply be and/or where is the interpretive failure in the argument above?
Thursday, November 26, 2009
Bookclub: Pedersen on Wright's Entitlement
This latest installment is about Nikolaj Pedersen's recent Synthese article on Crispin Wright. Pedersen criticizes (correctly, in my opinion) a certain possible motivation for Wright's idea that we are entitled to assume certain "cornerstone" propositions (like 'I'm not a brain in a vat') just because assuming these is requisite for getting any substantive theory of a given area off the ground. (You can't just point out that accepting ~BIV leads you to have many more true beliefs than the skeptic if BIV is false, and no fewer true beliefs if BIV is true. For avoiding false beliefs is presumably also epistemically important, and assuming ~BIV imposes a risk of having many more false beliefs.)
Instead he proposes that such cornerstone assumptions have "teleological value" insofar as they are aimed at something of value (namely, true belief), whether or not they actually succeed in producing such true beliefs. But this seems to immediately generalize to all beliefs - not just cornerstone ones.
For, what beliefs aren't aimed at the truth? It's just as true of the person who assumes the existence of a massive conspiracy as of the person who assumes the existence of the external world that they aim at having many true beliefs. Indeed, many people would say that it's a necessary truth, part of what it means for something to be a belief, that in believing that P one is trying to believe the truth.
With the possible exception of cases like the millionaire who bribes you to believe some proposition, all beliefs would seem to aim at truth. Hence it seems that all beliefs inherit teleological justification in Pedersen's sense.
One might be able to make this into an interesting view - all beliefs (not just cornerstone ones) are warranted until one gets active reason to doubt them. Such a position is reminiscent of conservatism and coherentism. But, judging from the article, Pedersen shows no sign of intending to say that all beliefs are default justified.
Wednesday, November 25, 2009
Crispin Wright and Rule Following
In the paper on Rule Following here, Wright suggests that good reasoning proceeds in obedience to concrete rules, rules for which we can in principle give at least a rule-circular justification. I claim that Wright's view is only tenable IF humanlike reasoners count as 'obeying' infinitely many rules in this sense.
For, suppose a good reasoner only obeys finitely many such inference rules. And suppose, as Wright wants to claim, that the reasoner can come (by reasoning) to provide a rule-circular justification for each such rule (i.e. to show that applying the rule cannot lead from truth to falsehood). But then our reasoner can combine all these finitely many justifications to arrive at the conclusion that anything arrived at by applying some combination of these rules must be true. Hence, he can derive that the combination of these rules doesn't allow one to prove "0=1".
But remember that the rules are supposed to be concretely described. So our reasoner can syntactically characterize the system which combines all these rules (the one which, unbeknownst to him, captures all his good reasoning), and state Con(the formal system which allows exactly the inferences allowed by these rules). But he knows the combination of rules is consistent, so he can derive the Con sentence for this set of rules. But by incompleteness II (on the assumption that the good reasoner's reasoning extends Robinson's Q, so that the theorem applies) this is impossible.
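A compressed rendering of the argument, in my own notation and glossing over how the finitely many justifications get combined inside the system: let $R_1, \dots, R_n$ be the concretely described rules and $S$ the formal system allowing exactly the inferences they license, with $S \supseteq Q$.

(i) $S \vdash$ "$R_i$ is truth-preserving", for each $i$ (the rule-circular justifications).
(ii) $S \vdash$ "anything derivable by combining $R_1, \dots, R_n$ is true" (combining (i)).
(iii) $S \vdash \mathrm{Con}(S)$ (from (ii), since "0=1" is not true).
(iv) $S \nvdash \mathrm{Con}(S)$ (Godel's second incompleteness theorem, given $S \supseteq Q$ and $S$ consistent).

(iii) and (iv) together give the contradiction claimed above.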
Hence, if Wright's theory about obedience to rules is correct, any good reasoner who accepts principles of reasoning that include Q (like us) must be obeying infinitely many rules.
[This may be problematic if one wants the notion of obedience to a rule to have some kind of psychological reality]
[Note that this doesn't mean the good reasoner's behavior won't be describable in some more efficient way by some finite collection of rules, just that the reasoner doesn't have access to these rules, in Wright's sense of being able to prove that they are truth preserving]
Use, Meaning and Number Theory
I like to joke that all philosophical questions take their clearest and most beautiful form in philosophy of math. But I think this is actually true in the case of questions about how much our use of a word has to "determine" the meaning of that word.
Consider the relationship between our use of the language of number theory, and the meaning of claims in this language.
I think the following two claims are about as uncontroversial as anything gets in philosophy:
a) The collection of sentences we are disposed to assert about number theory (for any reasonable sense of the word "disposition") is recursively enumerable.
b) The collection of truths of number theory is not. In particular, there's a fact of the matter about all claims of the form "every number has such-and-such recursively checkable property". Whether fictionalism or platonism is the correct view about how to understand talk of the numbers, mathematicians are surely wondering about something when they ask "Are there infinitely many twin primes?" (maybe something about what would have to be true of any objects that had the structure which we take the numbers to have).
But what emerges from these two claims is a nice, and perhaps surprising, picture of the relationship between use and meaning.
I use the words "all the numbers" in a way that (being r.e., and hence limited by Godel's theorem) only allows me to derive certain statements about the numbers. We can picture my reasoning about the numbers as what could be gotten by deriving things from a certain limited collection of axioms.
BUT in asserting this limited collection of statements, I count as talking about some collection of objects, or some structure that objects could have. And there are necessary truths about what those objects are like (what anything that has that structure must be like) which are not among the claims my use allows me to derive.
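One way to make the gap between a) and b) concrete, a standard application of Godel's first incompleteness theorem on the further assumptions that the asserted set includes Robinson's Q and is consistent:

Let $U$ be the r.e. set of number-theoretic sentences we are disposed to assert. If $U \supseteq Q$ and $U$ is consistent, then there is a sentence $G_U$ of the form "every number has such-and-such recursively checkable property" which is true but not derivable from $U$. So the truths about the subject matter we thereby talk about outrun what our use lets us derive.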
[If you're daunted by the mathematical example, here's another one, inspired (oddly) by Wittgenstein on philosophy of math. You use the words "bauhaus style" and "ornate" in a certain way, mostly to describe particular objects. This gives your words a meaning (though perhaps there is some vagueness). They would apply to some objects but not to others. Hence the question "Can anything be both bauhaus style and ornate?" has an answer that is true, false, or perhaps indeterminate if, e.g., objects could be ornate in a way that makes it vague whether they are in the bauhaus style or not. But your use (e.g. your ability to say, when presented with one particular thing, whether it is bauhaus style/ornate) does not include anything which allows you to arrive at one answer to the question or the other.]
So, there's a nice clear sense in which it strongly appears that: even if use determines meaning, facts about the meaning of our sentences can go beyond what our use of the words contained in them allows us to derive.
And any philosophy that wishes to deny this claim will have to do quite a lot to make itself more plausible than a) and b) above!
Saturday, November 21, 2009
Plenitudinous Platonism, Boolos and Completeness
Plenitudinous Platonism tries to resolve worries about access to mathematical objects by saying that there are mathematical objects corresponding to every "coherent theory".
The standard objection to this, based on a point by Boolos, is that if 'coherent' means first-order consistent, then this has to be false, because there are first-order consistent theories which are jointly inconsistent; but if 'coherent' doesn't mean first-order consistent, the notion is obscure.
I used to think this objection was pretty decisive, but I don't any more.
For, contrast the following two claims:
- TRUE: all consistent first-order theories have models in the universe of sets (completeness theorem)
- FALSE: all consistent first-order theories are true (Boolos' point)
Which of these is relevant to the plenetudinous platonist?
What the plenitudinous platonist needs to say is that whichever first-order consistent things we had said about math, we would have expressed truths. But remember that quantifier restriction is totally ubiquitous in math and in life (if someone says all the beers are in the fridge they don't mean all the beers in the universe, and if some past mathematician says there's no square root of -2, they may be best understood as not quantifying over a domain that includes the complex numbers).
So what the plenitudinous platonist requires is that every first-order consistent theory comes out true under some suitable restriction of the domain of quantification and interpretation of the non-logical primitives. And this is something the reductive platonist must agree with, because of the completeness theorem! The only difference is that the reductive platonist thinks there are models of these theories built out of sets, whereas the plenitudinous platonist thinks there's a structure of fundamental mathematical objects corresponding to each such theory.
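Stated as a single displayed claim (this is just my formulation of the commitment in the previous paragraph):

For every first-order consistent theory $T$ there are a domain $D$ and an interpretation $I$ of $T$'s non-logical primitives over $D$ such that $(D, I) \models T$.

The completeness theorem already delivers this with $D$ a set; the plenitudinous platonist adds only that $D$ may be taken to consist of fundamental mathematical objects rather than set-theoretic surrogates.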
Thus, plenitudinous platonism's ontological commitments can be stated pretty crisply, as above. And there's nothing inconsistent (or otherwise incoherent) about these commitments, unless normal set theory is inconsistent as well!
Rabbits
Causal contact with rabbits seems to be involved in almost exactly the same way in the following two statements:
RH "There's a rabbit"
MP "The mereiological complement of rabbithood is perforated here" (Or, for short: "The Rabcomp is perf")
I mean, light bouncing off rabbits and hitting our eyes would seem to be what causes (assent to) both sentences.
Thus: if we try to say that RH refers to rabbits because assertions of it are typically caused by rabbits, we would (it seems!) also get the false result that MP refers to rabbits.
[Thus causal contact doesn't seem to be what does the work in resolving Quinean reference indeterminacy - which makes things look hopeful for the view that reference in mathematics can be as determinate as reference anywhere else.]
RH "There's a rabbit"
MP "The mereiological complement of rabbithood is perforated here" (Or, for short: "The Rabcomp is perf")
I mean, light bouncing off rabbits and hitting our eyes would seem to be what causes (assent to) both sentences.
Thus: if we try to say that RH refers to rabbits because assertions of it are typically caused by rabbits, we would (it seems!) also get the false result that MP refers to rabbits.
[Thus causal contact doesn't seem to be what does the work in resolving Quinean reference indetermenacy - which makes things look hopeful for the view that reference in mathematics can be as determinate as reference anywhere else.]
Friday, November 20, 2009
Speed Up Theorem and External Evidence
It's been suggested (e.g. by Pen Maddy, Philip Kitcher, and possibly by my advisor PK) that we can get some 'external evidence' for the truth of mathematical statements which are independent of our axioms, by noticing that they allow us to prove things which we already know to be true (because we can prove them directly from our axioms) much more quickly.
However, Godel's Speed Up Theorem seems to show that ANY genuine strengthening of our axioms would have this property. I quote from a presentation by Peter Smith:
"If T is nice theory, and γ is some sentence such
that neither T
⊢ γ nor T ⊢ ¬Î³. Then the theory T + γ got
by adding γ as a new axiom exhibits ultra speed-up over T"
"Nice" here means all the hypotheses needed for Godel's theorem to apply to a theory, and "ultra speed up" means that for any recursive function, putatively limiting how much adding γ can speed up a proof, there's some sentence x whose proof gets sped up by more than f(x) when you add γ to your theory T.
Smith just points out that we shouldn't be surprised by historical examples of proofs using complex numbers or set theory to prove things about arithmetic.
But doesn't this theorem also raise serious problems for taking observed instances of speed up to be evidence for the truth of a potential new axiom γ ?
More Davidson Obsession
In their book on Davidson, Lepore and Ludwig suggest that when Davidson says an expression E is a semantic primitive if the "rules which give the meaning for the sentences in which it does not appear, do not suffice to determine the meaning of sentences in which it does appear", he means that "someone who knows [these rules for how to use all sentences not containing E] is not thereby in a position to understand" sentences containing E.
Intuitively, I presume the idea is supposed to be something like this: "big cat" is not a semantic primitive, since you could learn its use just by hearing expressions like "big dog" and "orange cat" but "cat" is a primitive, since you wouldn't be able to understand this expression without previous exposure to sentences containing it.
However, I think this definition turns out to be rather problematic.
Firstly, by 'rules' Lepore and Ludwig later clarify that they don't mean consciously posited rules which we might have "propositional knowledge" of. So they don't mean something like "i before e, except after c". Rather, the relevant rules are supposed to be tacit, or unconscious.
So it seems like we can restate the criterion by saying something like:
E is a semantic primitive iff merely learning how to use expressions that don't contain E doesn't put one in a position to understand the use of E.
But now here's the problem.
-If "being in a position to understand" the use of E means being able to logically derive facts about the use of E then all words are semantic primitives. There's nothing logically impossible about a language in which there happens to be a special exception where, where by combine "big" and "cat" this means hyena rather than big cat.
- On the other hand, if "being in a position to understand" the use of E means being likely to use E correctly, this is a fact about the relationship between a language and varying aspects of human psychology.
Here's what I mean:
Model someone learning a language as having a prior probability distribution over all possible functions pairing up sentences of a language they know with propositions, and then reacting to experience by ruling out certain interpretation functions, when they fail to square with the observed behavior of people who speak the relevant language. On this model, theories like Chomskian linguistics amount to saying that babies assign 0 prior probability to certain regions of the space of possible languages.
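Here is a toy version of that model in Python (entirely my own illustration; the candidate interpretations, priors, and data are made up for the example): a learner starts with a prior over candidate interpretation functions and zeroes out those inconsistent with observed usage.

candidates = [
    {"big dog": "large canine", "big cat": "large feline"},  # compositional reading
    {"big dog": "large canine", "big cat": "hyena"},         # deviant reading
]

priors = {
    "foolhardy tourist": [0.99, 0.01],  # nearly assumes the compositional reading outright
    "hyperparanoid":     [0.50, 0.50],  # treats every complex expression as wide open
}

observations = [("big dog", "large canine")]  # no direct evidence about "big cat"

def posterior(prior, data):
    # Zero out interpretations inconsistent with the data, then renormalize.
    weights = [p if all(c.get(s) == m for s, m in data) else 0.0
               for p, c in zip(prior, candidates)]
    total = sum(weights)
    return [w / total for w in weights]

for learner, prior in priors.items():
    print(learner, posterior(prior, observations))
# Both candidates survive the same evidence, so whether "big cat" now counts as
# understood (and hence whether it was a semantic primitive for this learner)
# depends on the prior, not on any feature of the language itself.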
We can imagine a continuum of logically possible distributions of prior probability, ranging from the foolhardy tourist who assumes that everyone is speaking English until given strong behavioral evidence to the contrary, to the poet who feels sure he knows what a "fat sound" is the very first time he hears "fat" applied to things other than physical objects, to the anxious nerd who asks for examples of "fat" vs. "thin" sounds, to the hyperparanoid person who worries about the possibility that the combination of "fat" and "cat" might fail to mean a cat that's fat, just as the combination of "toy" and "soldier" fails to mean a soldier that's a toy.
Presumably actual (sane) people won't differ too much in their linguistic priors. [Though I wouldn't be surprised if babies and adults differed radically in this regard.]
But notice that being a semantic primitive turns out to have nearly nothing to do with the role of a word in a language. Rather it has to do with our cautious or uncautious tendency to extend examples of verbal behavior in one way rather than another. For the foolhardy tourist no English words are semantically primitive (on hearing a single word he comes to understand everything in one swoop) whereas all expressions are semantically primitive for the hyperparanoid person. Two people could learn the same language, and a word would be a semantic primitive for one of them, but not for the other.
Thus, so far as I can tell, the notion of 'semantic primitive' is incorrectly, or inadequately, defined for Davidson's purposes.
There's no limit to how complex a language a finite creature could "learn" on the basis of even a single observation. Whatever pattern of brain states and behaviors suffices for counting as understanding the language, we can imagine a creature that just starts out with a disposition to immediately form those, if it ever hears the sound "red". The only real limit on the complexity of languages has nothing to do with learning, but rather with the complexity of the kind of behavior which competence with a given language would require: our finite brains need to be able to produce behavior that suffices for the attribution of understanding of the relevant language.
Thus, I think, the claim that all human learnable languages have to have only finitely many 'semantic primitives' adds nothing but giant heaps of philosophical confusion and tortured metaphor to the (comparatively) clear and obvious claim that there have to be relatively short programs capable of passing the Turing test.
Intuitively, I presume the idea is supposed to be something like this: "big cat" is not a semantic primitive, since you could learn its use just by hearing expressions like "big dog" and "orange cat" but "cat" is a primitive, since you wouldn't be able to understand this expression without previous exposure to sentences containing it.
However, I think this definition turns out to be rather problematic.
Firstly, by 'rules' Lepore and Ludwig later clarify that they don't mean consciously posited rules which we might have "propositional knowledge" of. So they don't mean something like "i before e, except after c". Rather, the relevant rules are supposed to be tacit, or unconscious.
So it seems like we can restate the criterion by saying something like:
E is a semantic primitive iff merely learning how to use expressions that don't contain E doesn't put one in a position to understand the use of E.
But now here's the problem.
-If "being in a position to understand" the use of E means being able to logically derive facts about the use of E then all words are semantic primitives. There's nothing logically impossible about a language in which there happens to be a special exception where, where by combine "big" and "cat" this means hyena rather than big cat.
- On the other hand, if "being in a position to understand" the use of E means being likely to use E correctly, this is a fact about about the relationship between a language and varying aspects of human psychology.
Here's what I mean:
Model someone learning a language as having a prior probability distribution over all possible functions pairing up sentences of a language they know with propositions, and then reacting to experience by ruling out certain interpretation functions, when they fail to square with the observed behavior of people who speak the relevant language. On this model, theories like Chomskian linguistics amount to saying that babies assign 0 prior probability to certain regions of the space of possible languages.
We can imagine a contiuum of logically possible distributions of prior probability, ranging from the foolhardy tourist who assumes that everyone is speaking English until given strong behavioral evidence against to the poet who feels sure he knows that a "fat sound" is the very first time he hears fat applied to things other than physical objects, to the anxious nerd who asks for examples of "fat" vs. "thin" sounds, to they hyperparainoid person who worries about the possibility that the combination of "fat" and "cat" might fail to mean a cat that's fat, just as the combination of "toy" and "soldier" fails to mean a soldier that's a toy.
Presumably actual (sane) people won't differ too much in their linguistic priors. [Though I wouldn't be surprized if babies and adults differed radically in this regard.]
But notice that being a semantic primitive turns out to have nearly nothing to do with the role of a word in a language. Rather it has to do with our cautious or uncautious tendency to extend examples of verbal behavior in one way rather than another. For the foolhardy tourist no English words are semantically primitive (on hearing a single word he comes to understand everything in one swoop) whereas all expressions are semantically primitive for the hyperparanoid person. Two people could learn the same language, and a word would be a semantic primitive for one of them, but not for the other.
Thus, so far as I can tell, the notion of 'semantic primitive' is incorrectly, or inadequately, defined for Davidson's purposes.
Wednesday, November 18, 2009
bookclub: Gareth Evans on Semantics + Tacit Knowledge I
I just discovered Gareth Evans has a neat article (probably a classic) about the very issues of what a semantic theory is supposed to do, which I've been worrying about recently. I found it so interesting that I'll probably write a few posts about different issues in this article.
The article starts out by paraphrasing Crispin Wright, to the following effect:
If philosophers are trying to state what it takes for sentences in English to be true, there's a very simple schema '"S" is true iff S' (this is called Tarski's T schema) which immediately gives correct truth conditions for all English sentences.
But obviously, when philosophers try to give semantic theories they aren't satisfied with just doing this. So what is the task of formal semantics about?
I think this is a great question. When I first read it I thought:
Perhaps what we want to do is notice systematic relationships between the truth conditions for different sentences in English e.g. whenever "it is raining" is true "it is not the case that it is raining" is false. If you want to make this sound fancy, you could call it noticing which syntactic patterns (e.g. sentence A being the result of sticking "it is not the case that" on to the front of sentence B) echo interesting semantic properties (e.g. sentence A having the opposite truth value from sentence B).
However, I would call this endeavor the study of logic, rather than semantics. So far we have logical theories that help us spot patterns in how words like "and" and "there is" (and perhaps "necessarily") affect the truth conditions for sentences they figure in. There may be similar patterns to notice for other words as well (e.g. color attributions - something can be both red and scarlet but not both red and green) and one could develop a logic for each of these.
We aren't saying what "and" means (presumably if we are in a position to even try to give a logic for English expressions we already know that "and" means and), rather we are discovering systematic patterns in the truth conditions for different sentences containing "and".
So, we can rule one other thing off the list.
Instead, Wright suggests (and Evans seems to allow) that semantics is to go beyond trivially stating the truth conditions for English sentences by "figuring in an explanation of the speaker's capacity to understand new sentences". (I am quoting from Evans, but both deplore the vagueness of this statement).
This sounds initially plausible to me, but it raises a question:
Once we have noticed that attributions of meaning don't require anything deeper than the kinds of systematic patterns of interactions with the world displayed by Wittgenstein's builders (maybe with some requirement that these interactions be produced by something that doesn't look like Ned Block's giant look-up table), the question of how human beings actually manage to produce such behavior seems to be a purely scientific question.
There are just neuroscientific facts, about a) how the relevant alterations of behavior (corresponding e.g. to learning the word "slab") are produced when a baby's brain is exposed to a suitable combination of sensory inputs and b) what algorithm most elegantly describes/models this process.
So, what's the deal with philosophers trying to do semantics? And what does it take for an algorithm to model a brain process better or worse? I'll try to get clearer on these questions, and what Evans would say about them, in the next post.
Labels:
bookclub,
philosophy of language,
philosophy of mind
Saturday, November 14, 2009
Practical Helpfulness: Why Care?
Readers of the last two posts may well be wondering why I'm going on so much about the "practical helpfulness" of mathematics.
One thing is, I wish I had a better name for it than "practical helpfulness", so maybe someone will suggest one :).
More seriously, I think the fact that our mathematical methods are (in effect) constantly making predictions about themselves and other kinds of a priori reasoning - not to mention combining with our methods of observation to yield predictions that observation alone would not have yielded (see the computer example) - has two important consequences.
Firstly, it shows that our reasoning about math is NOT the kind of thing you are likely to get just by making a series of arbitrary stipulations and sticking to them. All our different kinds of a priori reasoning (methods for counting abstract objects, logical inference, arithmetic, intuitive principles of number theory, set theoretic reasoning that has consequences for number theory) fit together in an incredibly intricate way. Each method of reasoning has myriad opportunities to yield consequences that would lead us to form false expectations about the results of applying some other method. And yet, this almost never happens!
Thus, there's a question about how we could have managed to get methods of armchair reasoning that fit together so beautifully. Some would posit a benevolent god, designing our minds to reason only in ways that are truth-preserving and hence coherent in this sense. But I think a process of free creativity to come up with new methods of a priori reasoning, plus Quinean/Millian revision when these new elements did raise false expectations, can do the job. This brings us to the second point.
Secondly, if we think about all these intended internal and external applications as forming part of our conception of which mathematical objects we mean when we talk about e.g. the numbers, then Quinean/Millian revision when applications go wrong will amount to a kind of reliable feedback mechanism, maintaining and improving the fit between what we say about "the numbers" and what's actually true of those-mathematical-objects-whose-structure-mirrors-the-modal-facts-about-how-many-objects-there-are-when-there-are-n-Fs-and-m-(distinct)-Gs etc.
Examples
In my last post, I proposed that our methods of reasoning about math are "practically helpful", in (at least) the sense that they act as reliable shortcuts. Mathematical reasoning leads us to form correct expectations about (and hence potentially act on) the results of various processes of observation and/or inference, without going through these processes.
Now I'm going to give some more interesting examples of (our methods of reasoning about) mathematics being practically helpful to us in this way.
The general structure in all these is the same: Composing a process of mathematical reasoning M with some other reasoning processes A yields a result that's (nearly always) the same one you'd get by going through a different process B.
Examples:
1. Observe computer (wiring looks solid, seems to be running program p etc.), derive that the program it's running doesn't halt, expect it to still be running after the first 1/2 hour <--> observe computer after 1/2 hour
2. Observe cannonballs, form general belief about trajectory of ball launched at various angles, observe angle of launch, derive where trajectory lands <---> measure where this ball does land.
3. Prove a general statement, expect 177 not to be a counterexample <---> (directly) check whether 177 is a counterexample.
4. Conclude that some system formalizes valid reasoning about some math truths, expect that you aren't looking at an inscription of a proof of ``0=1'' in that system <---> check what you have to see whether it's an inscription of a proof in the system ending in ``0=1''.
5. Count male rhymes in poem, count female rhymes, then add <---> Count total rhymes
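To make the shortcut structure concrete, here is a toy version of example 3; the particular theorem (that the sum of the first n odd numbers is n^2) and the number 177 are just illustrative stand-ins.

def expected_from_theorem(n):
    # What the general proof leads us to expect the sum to be.
    return n ** 2

def direct_check(n):
    # Actually add up the first n odd numbers, one by one.
    return sum(range(1, 2 * n, 2))

n = 177
# The proof-based expectation correctly anticipates the result of the direct check.
assert expected_from_theorem(n) == direct_check(n) == 31329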
[Special Case Study: Number Theory
If we focus on the case of reasoning about the numbers, we can see that there's a nice structure of mathematics creating correct expectations about mathematics, which creates correct expectations about mathematics, which creates correct expectations about the world.
- general reasoning about the numbers: Ax Ay Az ((x+y)+z) = (x+(y+z))
- calculations of particular sums: 22+23=45
- assertions of modal intuition: whenever there are 2 apples and 2 oranges there must be 4 fruit
- counting procedures: there are two ``e''s in ``there''
Note that each of the above procedures allows us to correctly anticipate certain results of applying the procedure below it. ]
How (Our Beliefs About) Math are Practically Helpful
All philosophers of math will agree that people do something they call "math", and that this activity is practically helpful, in a certain sense. This is often put pretty loosely by saying `Math helps us build bridges that stand up'. But I think we can say something much clearer than that. Here goes:
Our grasp of math (such as it is) has at least three aspects:
- We can follow proofs. You will accept certain kinds of transitions from one mathematical sentence to another, (or between mathematical sentences and non-mathematical ones) when these are suggested to you.
- We can come up with proofs. You have a certain probability of coming up with chains of inference like this on your own.
- Proofs can create expectations in us. Accepting certain sentences makes you disposed to react with surprise and dismay should you come to accept other sentences. e.g. if you accept "n is prime" you will react with surprise and dismay to a situation where you are also inclined to accept "n has p, q, and r as factors".
Now, the sense in which our mathematical practices are helpful is this:
First, our reasoning about math fits into our overall web of beliefs in such a way as to create additional expectations. Here's what I have in mind: Fix a situation. People in that situation who realize their dispositions to make/accept mathematical inferences arrive in a state where they will be surprised by more things than those in the same situation who don't.
For example, plonk a bunch of people down in front of a bowl of red and yellow lentils. Make each person count the red lentils and the yellow lentils. Now give them some tasty sandwiches and half an hour. Some of the people will add the two numbers. Others will just eat their sandwiches. Now, note that the people who did the math have formed extra expectations, in the following sense. If we now have our subjects count the lentils all together, the people who did the sum will be surprised if they get anything but one particular number, whereas those who didn't do the math will only be surprised if they get anything outside of a certain given range.
Secondly, the extra expectations raised by doing math are very very often correct. When doing mathematical reasoning about your situation puts you in a state where (now) you'd be surprised if a certain observation/reasoning yields anything but P, applying this process tends to yield P. (This is especially true if we weight the satisfaction/dissatisfaction of strong expectations more heavily). Thus, composing a process of mathematical reasoning M with some other reasoning process A nearly always yields correct expectations about the result of going through a different process B, if it yields any expectations at all.
And finally, this is (potentially) helpful, because it means not only do we acquire the disposition to be surprised if B yields something different, but any further inferences/actions which would get triggered by doing B happen immediately after doing A and M without having to wait for B to take place. For example, in the case from the previous post: if we imagine that all of our sample population have inductively associated counting 1567 lentils in total with having enough to make soup, the people who did the addition after counting the lentils separately, start cooking earlier than those who did something else instead.
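Here is a minimal simulation of the lentil case (the bowl's composition is invented; 1567 is just the soup threshold from the example above):

import random

# Counting the red and the yellow lentils separately and adding is a shortcut that
# correctly anticipates the result of counting everything together.
bowl = ["red"] * 900 + ["yellow"] * 667
random.shuffle(bowl)

reds = sum(1 for lentil in bowl if lentil == "red")
yellows = sum(1 for lentil in bowl if lentil == "yellow")

shortcut_expectation = reds + yellows   # formed over sandwiches, before any recount
direct_count = len(bowl)                # counting all the lentils together

assert shortcut_expectation == direct_count == 1567
# Whoever did the addition can start cooking without waiting for the second count.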
To summarize:
Doing math is practically helpful in the sense that spending time doing math raises extra expectations (relative to spending that time eating sandwiches) about the results of certain other processes, and these expectations are generally correct. Thus, mathematical reasoning constitutes a reliable shortcut, leading us to take whatever actions would be triggered by going through some other process B without actually going through B.
NOTE: I don't mean to suggest that this is all there is to math, or that math is somehow *merely* instrumental. I'm just trying to concretely state some data about the successful "applications" of math, which I think everyone will agree to.
Analyticity: No Free Lunch
Consider the following paradigmatic examples of analytic and synthetic sentences:
(1) "Dogs once existed."
(2) "Prime numbers have two distinct divisors: themselves and one."
Both of these statements feel extremely obvious to us. And, if anything, we're more likely to stop asserting (2) than (1) - if some perverse person wants to count 1 as a "prime" number, that's fine with me, so (if he's insistent enough) I'll adopt his usage and hence stop saying sentence (2) (and e.g. change how I state the fundamental theorem of arithmetic accordingly). So - we wonder, after reading Quine - what does the further claim that (2) is analytic amount to?
Here's an idea: If someone asked me to back up my assertion of (1), I'd be surprised, but there are things I would do to support this, e.g. give an example of a dog. If (bizarrely) I couldn't state any other claims in support of (1), I'd be troubled. In contrast, if asked to justify (2) I wouldn't be able to give any kind of argument for it AND I wouldn't be troubled by this, or inclined to revise. (Note: this is exactly when claims about analyticity and meaning come up in ordinary contexts - people say 'that's just what I mean by the term' when faced with skepticism about certain things.)
S is basic analytic in P's idiolect iff: P is happy to accept S without being able to provide any further justification.
S is analytic in P's idiolect iff: S is basic analytic, or S is derivable via some combination of premises and inferences, each of which is basic analytic.
This seems to pick out a relatively sharp class of sentences, and accord with our intuitive judgments of analyticity (at least if we assume that experience can somehow be cited as a justification [or something more sophisticated], so that direct observations don't count as analytic for the observer).
Does this refute Quine? No. For, let's think about what epistemological significance (this notion of) analyticity has. Do we have some kind of special access to analytic truths?
Making a bunch of new sentences analytic in your idiolect is just a matter of developing the inclination to say "that's just what I mean by the word" when pressed for a justification of these sentences. And this refusal to provide extra justification doesn't somehow ensure that the sentences you assert so boldly come to express truths.
For, what bucking up your insouciance like this does is change the facts about your use of words so that (now), if certain of your words are meaningful at all, these sentences will express truths. Thus, it makes these sentences/inferences function as a kind of implicit definition of your terms. But, as the famous case of Tonk shows, not all implicit definitions are coherent. Also, in changing the meanings of your words in this way, you run the risk of making other non-analytic sentences that you currently accept now express falsehoods.
Thus, saying that some sentence S is analytic isn't some kind of epistemic free pass for you to accept that sentence. All it does is semantically push all your chips into the center of the table with regard to S. Whereas before you ran the risk that S would express a falsehood, now there's a better chance that S will express a truth, but if it doesn't both S and a bunch of other sentences in your language will be totally meaningless.
So, here's my current position: the analytic-synthetic distinction is real, but it doesn't give the epistemological free lunch* which the logical positivists hoped it would.
*i.e. just saying that the facts about something (like math) are analytic doesn't banish mysteries about how we came to know these facts.
(1) "Dogs once existed."
(2) "Prime numbers have two distinct divisors: themselves and one."
Both of these statements feel extremely obvious to us. And, if anything we're more likely to stop asserting (2) than (1) - if some perverse person wants to count 1 as a "prime" number, that's fine with me, so (if he's insistent enough) I'll adopt his usage and hence stop saying sentence 2 (and e.g. change how I state the fundamental theorem of algebra accordingly). So - we wonder, after reading Quine, what does the further claim that (2) is analytic amount to?
Here's an idea: If someone asked me to back up my assertion of (1), I'd be surprised, but there are things I would do to support this e.g. give an example of a dog. If (bizzarely) I couldn't state any other claims in support of (1), I'd be troubled. In contrast, if asked to justify (2) I wouldn't be able to give any kind of argument for it AND I wouldn't be troubled by this, or inclined to revise. (Note: this is exactly when claims about analyticity and meaning come up in ordinary contexts - people say 'that's just what I mean by the term' when faced with skepticism about certain things.)
S is basic analytic in P's idiolect iff: either P is happy to accept S without being able to provide any further justification
S is analytic in P's idiolect iff: S is basic analytic S is derivable via some combination of premises and inferences, each of which is basic analytic.
This seems to pick out a relatively sharp class of sentences, and accord with our intuitive judgments of analyticity (at least if we assume that experience can somehow be cited as a justification [or something more sophisticated], so that direct observations don't count as analytic for the observer).
Does this refute Quine? No. For, let's think about what epistemological siginificance (this notion) analyticity has. Do we have some kind of special access to analytic truths?
Making a bunch of new sentences analytic in your idiolect is just a matter of developing the inclined to say "that's just what I mean by the word" when pressed for a justification of these sentences. And this refusal to provide extra justification doesn't somehow ensure that the sentences you assert so boldly come to express truths.
For, what bucking up your insouciance like this does, is change the facts about your use of words so that (now), if the certain of your words are meaningful at all, these sentences will express truths. Thus, it makes these sentences/inferences function as a kind of implicit definition of your terms. But, as the famous case of Tonk shows, not all implicit definitions are coherent. Also, in changing the meanings of your words in this way, you run the risk of making other non-analytic sentences that you currently accept now express falsehoods.
Thus, saying that some sentence S is analytic isn't some kind of epistemic free pass for you to accept that sentence. All it does is semantically push all your chips into the center of the table with regard to S. Whereas before you ran the risk that S would express a falsehood, now there's a better chance that S will express a truth, but if it doesn't both S and a bunch of other sentences in your language will be totally meaningless.
So, here's my current position: the analytic-synthetic distinction is real, but it doesn't give the epistemological free lunch* which the logical positivists hoped it would.
*i.e. just saying that facts about something (like math) is analytic doesn't banish mysteries about how we came to know these facts.
Saturday, November 7, 2009
Davidson Obsession
Davidson proposes (in "Truth and Meaning") that for a language to be learnable by finite creatures like us there must be a finite collection of axioms which entails all true statements of the form '"Snow is white" is true if and only if snow is white'. Then he, and his followers, argue that people with various kinds of theories can't satisfy this constraint e.g. that nominalists can't get a theory that entails the right truth conditions for mathematical statements without using axioms that quantify over abstracta.
Something about this argument strikes me as fishy, and I've spent hours obsessing over it at various times, replacing one putative "refutation" with another. :( But I can't stop thinking about it, so here's my newest attempt.
First, grant that for someone to count as understanding some words they need to know all the relevant instances of Tarski's T schema. So they have to be disposed to assent to every such sentence. Now, as every sophomore seeing Davidson for the first time points out, it's trivially easy to make a finite program that 'assents' to every query that's an instance of the T schema in a given language, or enumerates all such instances. But Davidson requires more: there needs to be a finite collection of axioms which logically entail all the instances. This is what gives Davidson's claim its potential bite. But now, we ask, why think this?
EITHER
Davidson thinks that to know the T schema you need to be able to consciously deduce its instances from other things you antecedently know. In this case the requirement that each instance of the T schema must be deducible from a finite collection of axioms would be motivated.
But this can't be right because no one can consciously produce such an axiomatization for our language. If we learned the T schema by consciously deriving it from some axioms, we should be able to state the axioms. Therefore, conscious deduction does not happen, and cannot be required.
OR
Davidson allows that it suffices for each instance of the T schema to individually feel obvious to you, (and for you to be able to draw all the right logical consequences from it etc.)
But to explain the fact that each sentence of this form feels obvious when you contemplate it, we just need to imagine your brain is running the sophomore-objection program which checks every queried string for being an instance of the T schema and then causes you to find a queried sentence obvious if it is an instance. Once we are talking about subpersonal processes there is no reason to model them as making derivations in first order logic, so the requirement is unmotivated.
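For concreteness, here is a minimal sketch of such a sophomore-objection program; it assumes, purely for illustration, that queries literally have the surface form '"S" is true if and only if S':

import re

# Recognize instances of the T schema by their surface form alone, without
# deriving them from any finite collection of axioms.
T_SCHEMA = re.compile(r'^"(?P<quoted>.+)" is true if and only if (?P<used>.+)$')

def is_t_instance(query):
    m = T_SCHEMA.match(query.strip())
    return bool(m) and m.group("quoted").casefold() == m.group("used").casefold()

assert is_t_instance('"Snow is white" is true if and only if snow is white')
assert not is_t_instance('"Snow is white" is true if and only if grass is green')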
Perhaps Davidson might argue that the subpersonal processes doing the recognition are somehow doing something equivalent to quantifying over abstracta, so the nominalist, at least, would have a problem. But do subpersonal processes really count as quantifying over anything? And if they do, is there any reason we have to agree with their opinions about ontology?
Produce the Code!
There's a three-way debate going on between those who want to understand our ability to think in terms of manipulation of intrinsically meaningful items in the head (physical token sentences of a language of thought), vs. merely in terms of connections, vs. behaviorists who think it doesn't matter how our brain produces suitable behavior.
Obviously, one would like to know a lot more about the neuroscience of language use. But, so far as I can tell, the philosophical aspects of this debate could be resolved right now, by producing toy blueprints/sample code. Then we look at the code, and consider thought experiments in which the brain actually turns out to work as indicated in the code...
Linguistic Behaviorism vs. Non-behaviorism:
If you think that stuff about how competent linguistic behavior is produced can be relevant to meaning, produce sample code A and B with the same behavioral outputs, such that we would intuitively judge that a brain that worked in way A vs. way B would mean different things by the same words. [I think Ned Block has done this with his blockhead]
If you think stuff inside the head also establishes determinacy of reference, contra Quine, produce two pieces of sample code A and B for a program that e.g. outputs "Y"/"N" to the query "Gavagai?", such that we would intuitively say people whose brains worked like A meant rabbit and those that worked like B meant undetached rabbit part.
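For illustration, here is a minimal toy sketch of the sort of thing being requested (everything about it is invented): two programs that give the same "Y"/"N" answers to the query "Gavagai?", but whose internals keep track of different things. Whether that internal difference settles what the speaker refers to is exactly what is at issue.

def speaker_a(scene):
    # Internally tracks whole rabbits.
    rabbits = [thing for thing in scene if thing == "rabbit"]
    return "Y" if rabbits else "N"

def speaker_b(scene):
    # Internally tracks undetached rabbit parts.
    parts = [part for thing in scene if thing == "rabbit"
             for part in ("ear", "foot", "tail")]
    return "Y" if parts else "N"

# Identical behavioral outputs on any scene:
assert speaker_a(["rabbit", "tree"]) == speaker_b(["rabbit", "tree"]) == "Y"
assert speaker_a(["tree"]) == speaker_b(["tree"]) == "N"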
Language of Thought vs. Mere Connectionism:
If you are a LOT-er who thinks things in the brain don't just co-vary with horses, but can actually mean `horse', produce sample code which generates verbal behavior, in response to sensory inputs, in such a way that we would intuitively judge pieces of the memory of a robot running that program to have meanings.
Then, produce sample code that works in a "merely connectionist way" and provide some argument that the brain is more likely to turn out to work in the former way.
[NOTE it does not suffice merely to give a program that derives truth conditions for sentences, unless you also want to posit a friendly homunculus who reads the sentences and works out what proper behavior would be. What your brain ultimately needs to do is produce the correct behavior! So, if you want to compare the efficiency of merely connectionist vs. LOT-like theories of how your brain does what it does, you need to write toy programs that evaluate evidence for snow being white, rocks being white, sand being white and respond appropriately - not just the trivial program that prints out an infinite list of sentences: "Snow is white" is true iff snow is white. "Sand is white" is true iff sand is white... ]
Charitably, I think the LOT-ers want to say that the only feasible way of making something that passes the Turing test will be to use data structures of a certain kind. But until they can show some samples of what data structures would and wouldn't count, it's really hard to understand this claim. (I mean, the claim is that you will need data structures whose tokens count as being about something. But which are these?).
The Problem of Logical Omniscience and Inferential Role
I just looked over a very old thing I wrote about the problem of logical omniscience. The problem of Logical Omniscience is: How can you count as believing one thing, while not believing (or even explicitly rejecting) something logically equivalent?
I suggested that propositions have certain preferred inferential roles, and that you count as believing that P to the extent that you are disposed to make enough of these preferred inferences, quickly and confidently enough.
So for example, someone can believe that a function is Turing computable but not that it's recursive, even though these two statements are provably equivalent, because they might be willing to make enough of the characteristic inferences associated with Turing computability, but not those for recursiveness. (The characteristic inferences for "...is Turing computable" would be those that people call "immediate" from the definition of Turing computability, and ditto for the - different - characteristic inferences for "...is recursive".)
This is interesting because:
1. The characteristic inferences associated with a proposition/word will NOT supervene on the inferences which that proposition/word justifies. Since Turing computability and recursiveness are provably equivalent, the very same inferences are JUSTIFIED for each one of them. But "This function is Turing computable" and "This function is recursive" need to have different characteristic inferences, to explain how you can know one but not the other.
2. Given (1), if you want to attach meanings to individual words, these meanings should not only include things like sense and reference, which help build up the truth conditions for sentences involving that word, but also something like characteristic inferences, which help you choose when to attribute to someone a belief involving this word rather than another word which would always contribute in exactly the same way to the truth conditions of any sentence.
3. It's commonly said that aliens would have the same math as us. If this means that they wouldn't disagree with us about math, that sounds right. But if it means that they would (before contact with humans) believe literally the same propositions as we do, I don't think so.
For, think about all the many different notions we could define which would be equivalent to Turing computability, but have different characteristic inferences. If you buy the above, each of these notions corresponds to a slightly different thought. Thus for the aliens to believe the exact same mathematical claims as we do, they would have to have the same definitions/mathematical concepts. But it's much less clear whether aliens would have the same aesthetic sense guiding what definitions they made/mathematical concepts they came up with. For example, I'm much more convinced that aliens would accept topology than that they would have come up with it. I mean, just think about the different kinds of math developed just by humans in different eras and countries.
Freedom and Resentment in Epistemology
Everyone likes to talk about Neurath's boat, but I think common discussion leaves out something critical. Not only do we all start with some beliefs, but we also start out accepting certain methods of revising those beliefs, in response to new experience or in the course of further reflection. This is crucial because it brings out a deep symmetry between all believers:
At a certain level of description, there's no difference between the atheist philosopher who finds it immediately plausible that bread won't nourish us for a while and then suddenly poison us, and the religious person who finds it immediately plausible that god exists, or the madman who finds it immediately plausible that he's the victim of a massive conspiracy. Everyone involved is (just) starting with whatever they feel is initially plausible, and revising this in whatever ways they find immediately compelling.
Thinking about things this way, can make one feel uncomfortable in deploying normative notions of justification. Being justified is supposed to be a matter of (something like) doing the best you can, epistemically, whether or not you are lucky enough to be right. But there's no difference in effort (or even, perhaps, in care) between the philosopher and the madman. It's just that the philosopher is lucky enough to find immediately compelling principles *that happen to be mostly true*, and inference methods *that happen to be mostly truth-preserving/reliable*. So how can we say that one of them is justified?
One reaction to this is to deny that there is such a thing as epistemic normativity. There are facts about which people have true beliefs, and which of them are on course to form more true beliefs, which belief forming mechanisms are reliable (in various senses) etc. But there are no epistemically normative facts, e.g. facts about which reliably true propositions are OK to assume, or which reliable inference methods are OK to employ without any external testing.
Another possible reaction is to say that even though "ultimately" there's no difference between finding it obvious that bread will nourish you if it always has in the past vs. believing you are the center of a conspiracy, there still are facts about justification. We can pick out certain broad methods of reasoning (logical, empirical, analytic(??), initially trusting the results of putative senses) which are both popular and generally truth preserving, and what it means to be justified is just to have arrived at a belief via one of those.
In either case, the result will give an answer to philosophical skepticism. The skeptic asks: "how can you be justified in believing that you have a hand, given that it depends on your just assuming without proof that you aren't a BIV?" Someone who has the first reaction can simply deny the contentious facts about justification. Someone who has the second reaction will be unimpressed by the point that they are "just assuming" that ~BIV. All possible belief is a matter of starting out "just assuming" some propositions and inference methods, and then applying the one to the other.
Thursday, November 5, 2009
Funny Footnote
Reading Mark Steiner's "Mathematics-Applications and Applicability", I noticed this footnote:
"Suppose we have a physical theory, like string theory, which postulates a 26 dimensional space. The number 26 happens to be the numerical value of the Tetragrammaton in Hebrew. Should this encourage us to try other of the Hebrew Names of God?"
[Note: in context, Steiner seems to think the answer to this question is yes]
"Suppose we have a physical theory, like string theory, which postulates a 26 dimensional space. The number 26 happens to be the numerical value of the Tetragrammaton in Hebrew. Should this encourage us to try other of the Hebrew Names of God?"
[Note: in context, Steiner seems to think the answer to this question is yes]
Sunday, November 1, 2009
Three jobs for logical structure:
You might think the "logical structure" of a sentence is a way of cutting it up into parts [eg. "John is happy" becomes "is happy(john)"] that does three things:
1. gets used by the logical theory that best captures all the valid inferences.
2. matches the metaphysical structure of the world.
3. explains how we are able to understand that sentence, by breaking it down into these parts, and understanding them.
However, it's not obvious that the method of segmentation which does any one of these things best should also do the others. I don't mean that this idea is crazy, just that it is a bold and substantive claim that logic unites cognitive science with metaphysics in this way.
It's also not obvious that *any* method of segmentation can do 1 or 2.
Task 1 might be impossible to perform because there might not be a unique best logical theory. If we think that the job of logic is to capture necessarily truth-preserving inferences, then second-order logic is logic. Any recursive axiomatization of second-order logic will be supplementable to produce a stronger one - since the truths of second-order logic aren't recursively axiomatizable. (One might hope, though, that all sufficiently strong logics that don't say anything wrong will segment sentences the same way.)
Task 2 might be impossible because the world might not have a logical structure to reflect. What do I mean by the world "having a logical structure"? I think there are two versions of the claim:
a. The basic constituents of the world are divided between the various categories produced by the correct segmentation e.g. concepts and objects in Frege's case.
This is weird because "constituents of the world" sound like they should be all be objects. But presumably objects don't join together to produce a sentence, so the kind of expressions used in your chunking up can't all be objects.
Its also weird because it just seems immediately weird to think of the world as having this kind of propositional structure, rather than our just using different propositions with structure to describe the world.
b. The objects that really exist (as opposed to those that are merely a façon de parler) are exactly those which are quantified over by true statements when these are formalized in accordance with the best method of segmentation. To misquote Quine: "the correct logical theory is the one such that, to be, is to be the value of a bound variable in the formalization of some true sentence in accordance with that theory."
So, for example, if mathematical objects can't be paraphrased away in first order logic, but they can be using modal logic, the question of whether mathematical objects exist will come down to which (if either) of these logics has the correct segmentation.
Finally, Task 3 is ambiguous between something (imo) silly and something about neuroscience.
The silly reading is that a correct segmentation should reflect the components *you* consciously break the sentence "John is happy" into when you hear and understand it (presumably none).
The neuroscience reading is: `into what components does *your brain* break this sentence when processing it to produce correct future behavior, give rise to suitable patterns of qualitative experience for you, etc.?' This is obviously metaphorical, but I think it makes sense. It seems very likely that there will be some informative algorithm which we can use to describe what your brain does when processing sentences (it might or might not be the same algorithm for different people's brains). And, if so, it's likely that there will be some natural units which this algorithm uses.
Labels: ontology, philosophy of language, philosophy of math
Is there a logic that...
Is there a logic that would capture inferences like:
-"John is very rich" --> "John is rich"
-"John is very very very very rich"--->"John is very rich"
Obviously it won't do to say "rich(John) ^ very (John)".
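Here's a minimal sketch (my own toy model, not a worked-out logic) of one standard-looking move: treat "very" not as a predicate of John but as a modifier that maps a gradable predicate to a stronger one, so that very(P) entails P by construction.

    # Toy model: a gradable predicate is a threshold on degrees; "very" raises
    # the threshold, so very(P)(x) entails P(x), and likewise for iterated "very".
    from typing import Callable

    Predicate = Callable[[float], bool]   # degree of richness -> rich or not

    def rich(degree: float) -> bool:
        return degree >= 1.0              # hypothetical threshold

    def very(p: Predicate) -> Predicate:
        return lambda degree: p(degree - 1.0)   # "very P" demands a higher degree

    john = 5.0                            # John's (made-up) degree of richness
    very4_rich = very(very(very(very(rich))))
    assert very4_rich(john)               # "John is very very very very rich"
    assert very(rich)(john)               # ... entails "John is very rich"
    assert rich(john)                     # ... entails "John is rich"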
-"John is very rich" --> "John is rich"
-"John is very very very very rich"--->"John is very rich"
Obviously it won't do to say "rich(John) ^ very (John)".
Saturday, October 31, 2009
On Quantifying Over Everything
Consider the following argument against quantifying over everything.
"It can't be possible to quantify over everything, because if you did, there would have to be a set, your domain of quantification, which contained all objects as elements. However, this set would have to have to contain all the sets. But there can be no set of all sets, by Russell's paradox argument."
I claim it's unsound for the following reason:
We presumably can quantify over all the sets (e.g. when stating the axioms of set theory). So, if (as this argument assumes) quantifying over some objects required the existence of a set containing all the objects quantified over, we would already have a set containing all the sets, hence Russell's paradox and contradiction.
Thus, meaningfully making an assertion about all objects of a certain kind does NOT require that there's a set containing exactly these objects.
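For what it's worth, the Russell-style argument appealed to here can be run as a simple diagonal check: for any collection of sets you can actually exhibit, the "set of those members that don't contain themselves" is never itself in the collection. A toy sketch of my own, with finite frozensets standing in for the real thing:

    # Diagonal sketch: for ANY collection U of (finite, hashable) sets, the set
    # r = {x in U : x not in x} cannot itself be a member of U -- if it were,
    # then since r is not in r, r would satisfy its own defining condition,
    # so r would be in r after all. Contradiction.
    def russell_diagonal(U):
        r = frozenset(x for x in U if x not in x)
        assert r not in U, "r in U would make 'r in r' contradictory"
        return r

    a = frozenset()
    b = frozenset({a})
    U = {a, b, frozenset({a, b})}
    print(russell_diagonal(U))   # a new set, guaranteed to lie outside U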
---
BONUS RANT: Why would one even think that where there is quantification there must be a set that's the domain of quantification? Because of getting over-excited about model theory I bet. [warning: wildly programmatic + underdeveloped claims to follow]
Model theory is just a branch of mathematics which studies systematic patterns relating what mathematical objects exist and what statements are always/never true. It's not some kind of Tractarian voodoo that `explains how it's possible for us to make claims about the world'. Nor do sets (e.g. countermodels) somehow actively pitch in and prevent claims like "Every dog has a bone" from expressing necessary truths!
"It can't be possible to quantify over everything, because if you did, there would have to be a set, your domain of quantification, which contained all objects as elements. However, this set would have to have to contain all the sets. But there can be no set of all sets, by Russell's paradox argument."
I claim it's unsound for the following reason:
We presumably can quantify over all the sets (e.g. when stating the axioms of set theory). So, if (as this argument assumes) quantifying over some objects required the existence of a set containing all the objects quantified over, we would already have a set containing all the sets, hence Russell's paradox and contradiction.
Thus, meaningfully making an assertion about all objects of a certain kind does NOT require that there's a set containing exactly these objects.
---
BONUS RANT: Why would one even think that where there is quantification there must be a set that's the domain of quantification? Because of getting over-excited about model theory I bet. [warning: wildly programmatic + underdeveloped claims to follow]
Model theory is just a branch of mathematics which studies systematic patterns relating what mathematical objects exist and and what statements are always/never true. It's not some kind of Tractarian voo-doo that `explains how it's possible for us to make claims about the world'. Nor do sets (e.g. countermodels) somehow actively pitch in and prevent claims like "Every dog has a bone" from expressing necessary truths!
Is "Set" Vague?
The (normal) intuitive conception of the hierarchy of sets is roughly this:
The hierarchy starts with the empty set, at the bottom. Then, above every collection of sets at one stage, there's a successor stage containing all possible collections made entirely from elements produced at that stage. And, above every such infinite run of successor stages, there's a limit stage, which has no predecessor, but contains all possible sub-collections whose elements have already been generated at some stage below.
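Just to fix ideas about the successor-stage part of this picture, here is a small sketch of my own (finite stages only, obviously), with frozensets standing in for sets:

    # Finite stages V_0, V_1, ... of the cumulative hierarchy: each successor
    # stage is the collection of all sub-collections of what has been generated
    # so far. Sizes go 0, 1, 2, 4, 16, 65536, ...
    from itertools import combinations

    def powerset(s):
        items = list(s)
        return {frozenset(c) for r in range(len(items) + 1)
                for c in combinations(items, r)}

    def stages(n):
        v = frozenset()                  # V_0: nothing generated yet
        result = [v]
        for _ in range(n):
            v = frozenset(powerset(v))   # successor stage
            result.append(v)
        return result

    for i, stage in enumerate(stages(4)):
        print(f"stage {i}: {len(stage)} sets generated")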
But how far up does this hierarchy of sets go? Is there a fact of the matter, or does our conception not determine this?
The conception/our intuitions about sets don't directly tell us when to stop. For any stages we suppose we are looking at, it always seems to make sense to think of new collections that contain only sets generated by that point (e.g. the collection of all things so far generated). Of the sets generated by any collection of stages we can ask:
- Does the proposed next stage/limit stage of these stages really make sense? Are there really such collections?
- If so, are the collections generated at this stage still sets?
A textbook will tell you that at some point the things generated by the process above DO make sense, but DON'T count as sets. So, for example, there is a collection of all sets, but (on pain of paradox) this is not itself a set, but only a class.
However,
a) This just pushes the philosophical question back to classes: is there a point at which there stop being classes? Is there something else (classes2) which stands to classes as classes stand to sets? [One of my advisors calls this the "neapolitan" view of set theory]
b) We don't have any idea of WHEN the things generated by the process above are supposed to stop counting as sets.
Note that the issue with b) is not just that we don't know whether sets of a certain size exist. There are lots of things about math we don't know, and (imo) could never know. Rather, the uneasy feeling is that our conception doesn't "determine" an answer to this question in the following much stronger sense:
There could be two collections of mathematical objects with different structures, each of which equally well satisfies our intuitive conception of set.
For, consider the hierarchy of classes (note: all sets are classes). There might be two different ways of painting the hierarchy to say at what point the items in it stop counting as sets. Our intuitive conception just seems to generate the hierarchy of classes, not to say when things in it stop being sets!
In contrast, in the case of the numbers, I might not know whether there are infinitely many twin primes, but any two objects satisfying the intuitive, second order, characterization of the numbers would have to have the same structure (and hence make all the same statements of arithmetic true).
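(For reference, the "intuitive, second order, characterization" I have in mind is something like the Dedekind/Peano axioms with full second-order induction; Dedekind's categoricity theorem is what guarantees that any two structures satisfying them share the same structure. A sketch:)

    % Second-order arithmetic axioms (sketch); by Dedekind's categoricity
    % theorem any two models of these are isomorphic.
    \forall x\, \big( S(x) \neq 0 \big) \\
    \forall x \forall y\, \big( S(x) = S(y) \rightarrow x = y \big) \\
    \forall X\, \Big( \big( X(0) \land \forall x\,( X(x) \rightarrow X(S(x)) ) \big) \rightarrow \forall x\, X(x) \Big)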
Thus, our intuitive conception of set seems to be hopelessly vague about where the sets end. Hence, even if you are a realist about mathematical objects, we seem forced to understand set theory as making claims about features shared by everything that satisfies the intuitive conception of set, rather than as making claims about a unique object.
Questions:
1. If you buy the reasoning in the main body of this post, does it give an advantage to modal fictionalism? e.g. the modal fictionalist might say: "You already need to agree with us that doing set theory is a matter of reasoning about what objects satisfying the intuitive conception of set would have to be like. What does incurring extra commitment to the actual existence of mathematical objects (as opposed to their mere possibility) do for you?".
2. An alternative would be to reject the textbook view, and say EVERYTHING generated by the process above is a set. Hence, you couldn't talk about a class of all sets. Would this be a problem?
3. [look up] Is it possible that all initial segments of the hierarchy of classes that reach up to a certain point are isomorphic? (I mean, the mere existence of a one-to-one, membership-preserving function that's into but not onto - the identity - doesn't immediately rule out that some *other*, more clever function IS an isomorphism.)
[Maybe you can prove this is not possible by using the fact that one initial segment would have extra ordinals, and this isomorphism could be used to define an isomorphism between ordinals of different sizes, which is impossible.]
4. Is there some weirdness about the idea that collections in general (whether they be sets or classes) eventually give out - so there's no collection of all collections?
We could say there are sets, classes, classes2, classes3 and so forth. This lets us say there's a class of all sets, and a class2 of all classes etc. But as far as collections in general go, we must admit that there's no collection of all collections, on pain of contradiction via Russell's paradox.
Well, I don't personally find this that problematic. It's a surprising fact about collections maybe, but mathematics often yields surprising results.
Relations vs. Sets of Ordered Pairs
(Normally in math) a relation is defined to be a set of ordered pairs.
But the `elementhood' relation between sets can't, itself, be a set of ordered pairs - since there can't be a set which contains every ordered pair (x, y) of sets such that x is an element of y. [From the existence of such a set you could use the axiom of collection in ZF to derive the existence of a set of all sets, and hence the Russell set and contradiction.]
Therefore, not all relations (in the ordinary sense) are sets of ordered pairs (i.e. relations in the mathematical sense).
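Here's one way to spell the derivation out without even invoking collection - a sketch of my own, assuming Kuratowski pairs, so a variant of the bracketed argument above rather than a quote from anywhere:

    % Sketch, assuming <x,y> = {{x},{x,y}}. If E = {<x,y> : x in y} were a set,
    % two applications of union would yield a set containing every set.
    E = \{\langle x, y\rangle : x \in y\} \\
    \textstyle\bigcup\bigcup E \;\supseteq\; \{x : \exists y\,(x \in y)\}
      = \{x : x \text{ is a set}\}
      \quad\text{(since every set } x \text{ satisfies } x \in \{x\}\text{)} \\
    \text{Separation then gives } \{x \in \textstyle\bigcup\bigcup E : x \notin x\},
      \text{ i.e. the Russell set. Contradiction.}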
Friday, October 30, 2009
Thin Realism #2
Hmm on further reflection, `thin realism' is just Lumpism.
So see the essay above for why Lumpism is right and all that seductive stuff about the world having a logical structure/existence claims having a special epistemological status is wrong.
"Thin Realism" - what could it be?
I always want to say "I think there are numbers - but I understand existence in a thin logical sense". But I feel kind of dishonest saying this. It's too much like the sleazy "Of course P - but I don't mean that in any deep philosophical way" which happens when Wittgensteinians get lazy.
So here are some actual concrete ways in which I differ from other platonists (i.e. other people who believe there are mathematical objects).
1. I don't think we need to posit numbers to explain how there can be unknowable mathematical facts.
2. I think fictionalism/if-then-ism is perfectly coherent. We could have had a mathematical practice which was completely based around mathematical properties, and studying their relations to one another e.g.: `Insofar as anything hierarchy-of-sets-izes, it's mathematically necessary that it satisfies the continuum hypothesis'.
And here's an attempt to say what having only "a thin logical notion of existence" means:
When we ask what objects exist, this is equivalent to asking what sentences with a given logical form (Ex) Fx are true. So far, this is just Quinean orthodoxy.
But now the question is: what makes a given sentence (say, of English) have a certain logical form?
Now, I think having existential form is just a matter of what inferences can be made with that sentence, and what other - contrasting - sentences are in the language. We cook up various logical categories in order to best represent, and exploit, patterns in which inferences are truth preserving. Furthermore, there's nothing special about objects, and object expressions. Each component of a sentence (be it concept-word, object-word, connective or operator) makes a systematic contribution to the truth conditions of the sentences it figures in (i.e. the class of possible situations where the sentence is true).
On this view, choices about the logical form of a sentence wind up not being very deep - the question is just what's the most elegant way to capture certain inference relations.
In contrast, (I propose) having a "thick" notion of objecthood and existence means thinking that there IS something more than an elegant summary of inference relations at stake when we decide how to cut sentences up into concepts and objects. For example, you might think:
1. It's easy to learn statements which don't imply that any objects exist (all bachelors are unmarried), whereas learning statements that do imply the existence of at least one object (there are some bachelors) is harder.
2. The *world* has a logical structure too! - so the most elegant way of cutting up your sentences to capture inference relations might still be wrong, because it fails to respect the logical structure of the world.
[Oh yes, they are kind of seductive. More about why they are wrong later.]
Unknowable Truths without Objects
I believe in mathematical objects, but I think the following appeal to them is dead wrong:
"The existence of mathematical objects is what allows there to be unknowable mathematical truths, whereas there are no unknowable logical or `conceptual' truths."
Corresponding to every unknowable AxFx statement in arithmetic, there's a purely modal statement, that's not ontologically committal, but would let you infer the arithmetical statement and hence must be equally unknowable, namely:
"It is impossible for there to be a machine on an infinite tape which a) acts in such and such such-and-such a physically specified way (here we have we list physical correlates of rules for some Turing machine program that checks every instance of the AxFx statement), and b) stops."
"The existence of mathematical objects is what allows there to be unknowable mathematical truths, whereas there are no unknowable logical or `conceptual' truths."
Corresponding to every unknowable AxFx statement in arithmetic, there's a purely modal statement, that's not ontologically commital, but would let you infer the arithmetical statement and hence must be equally unknowable, namely:
Thursday, October 22, 2009
Why I am not Carrie Jenkins
Carrie Jenkins' 2009 book Grounding Concepts: an Empirical Basis for Arithmetical Knowledge, proposes a theory that has a lot in common with my thesis project.
Both of us:
- want to give a naturalistic account of mathematical knowledge
- in particular, want to explain how humans can have managed to get a "good" combination of inference patterns - one that counts as thinking true things about some domain of mathematical objects/having a coherent conception of what those objects must be like - rather than "bad", 'tonk'-like patterns of reasoning.
-appeal to causal interactions with the world, to explain how we wind up with such combinations of inference dispositions.
BUT there are some important differences. Here's why (I claim) my view is better.
Jenkins' theory:
Jenkins winds up positing a whole bunch of controversial, and perhaps under-explained philosophical notions to account for how experience gives us good inference dispositions. She proposes that:
Experience has non-conceptual content which grounds our acquisition of concepts so as to help us form coherent ones. Then when we have a coherent concept of something like the numbers, we inspect it to see what must be true of the numbers and reason correctly about them.
-The idea that there's non-conceptual content is a controversial point in philosophy of perception.
-The idea that experience can "ground" concept acquisition without playing a justificatory role in the conclusions drawn is not at all clear. What is this not-justificatory, but presumably not just causal relationship of grounding supposed to be? (Kant's notion of a posteriori concepts seems relevant, but that's none-too clear either).
-Finally, what is concept inspection, (presumably you don't literally visit the 3rd realm and see the concepts) and how is it supposed to work? Jenkins admits that this is an open question for further research.
My theory:
In contrast, my view gives a naturalistic account of mathematical knowledge that doesn't need any of this controversial philosophical machinery. I propose that:
People are disposed to go from seeing things, to saying things, to being surprised if we then see other things, in certain ways. When these inference dispositions lead us to be surprised, we tend to modify them.
Thus, it's not surprising that we should have wound up with the kind of combination of arithmetical inference dispositions + observational practices + ways of applying arithmetic to the actual world, which makes our expected applications of arithmetic work out.
For example: insofar as we had a conception of the numbers which included the expectation that facts about sums should mirror logical facts in a certain way, it's not surprising that we wound up also believing the kinds of other claims about sums which make the intended applications to logic work out (e.g. believing 2+2=4 not 2+2=5).
Note that we don't need to posit any mysterious faculty of concept-inspection, or any controversial non-conceptual experience. All I appeal to is perfectly ordinary processes. People go from one sentence to another in a way that feels natural to them (whether or not they are so fortunate as to be working with coherent concepts like +, rather than doing the kind of reasoning Frege did about extensions). And when this natural-feeling reasoning leads to a surprise, they revise.
[Well, perhaps I'm also committed to the view that innate stuff about the brain makes some ways of revising more likely than others, and certain initial inference-dispositions more likely than others, in a way that doesn't make us always prefer theories that are totally hopeless at matching future experience. But you already need something like this even to explain how rats can learn that pushing a lever releases food, so I don't think this is very controversial.]
Tuesday, October 20, 2009
Mathematical Concepts and Learning From Experience
I've been reading Susan Carey's new book on the development of concepts, which features a lot of interesting stuff about the development of children's reasoning about number. The last two chapters are philosophical though, and bring up an important point, which it had not occurred to me needed to be stressed:
Learning from experience need not take the form of someone explicitly forming a hypothesis, and then letting experience falsify it/doing induction to conclude the hypothesis is true.
If this were all that experience could do, it would be hopeless to appeal to it to help explain how we could get mathematical knowledge. For, plausibly, you only count as having the concept of number, once you are willing to make certain kinds of applications of facts about the numbers, reason about the numbers largely correctly etc. So, by the time that experience could falsify hypotheses containing the mature concept of number, you would already have to have lots of mathematical knowledge.
Instead, experience helps us correct and hone our mathematical reasoning all through the process of "developing a concept". How can this be?
Well, firstly, think about the way students are normally introduced to the concept of set. No one makes a hypothesis that there are sets, nor do math profs attempt to define sets in other terms. Rather the professor just demonstrates various ways of reasoning about sets, ways of using these claims to solve other mathematical problems etc. and gets the students to practice. Given this, the student's usage and intuitions conform more and more to standard claims about the sets, and eventually they count as having the concept of set.
I propose (and I think Carey would agree) that the original development of many concepts in mathematics works similarly, only with trial and experience playing the role of the teacher.
You start out not having the concept, and try various usages. Here, however, rather than having a professor to imitate, you just have your general creativity/trial and error/analogical reasoning to suggest ways of reasoning about "the X"s and then an ability to check whatever kinds of consequences and applications you expect at a given time. Often this kind of creative trying and analogical reasoning will turn out to fail in some way, such as leading to contradiction, or underspecifying something important. But then you can correct it. Inconsistent reasoning about limits in the 19th century and sets in the early 20th would be examples of the former. And the kind of process of refinement of the notion of polygon in Lakatos's Proofs and Refutations would be an example of the latter.
We try out various patterns of reasoning about the world (e.g. calling certain things Xs, trying to apply the analogue of good reasoning about one domain to another) - with perhaps a nudge from brain structures subject to evolution affecting which patterns we are likely to try - and experience corrects these inference patterns until they cohere enough that we count as genuinely having some new concept. And note that no conscious scientific reasoning must be assumed to start this process; all we need is some disposition to go from seeing things to making noises to doing things, together with a playful/random/creative inclination to try extending those dispositions in various ways!
p.s. I haven't emphasized this point in the past, because I think questions like 'when exactly does someone start having the concept of X?' don't generally cut psychology or metaphysics at their joints. I mean: when exactly did people start having the modern conception of atom? The interesting facts are surely facts about when people started accepting this or that claim about "atoms", or reasoning about "atoms" in this or that way. Coming up with a decision about exactly what amount of agreement with us is necessary for people to count as having the same concept is a matter of arbitrary boundary setting.
But I realize now that ignoring the whole issue of concepts can be confusing. So let me just say:
When I say mathematical knowledge is a joint product of mathematically shaped problems in nature, correction by experience, the wideness of the realm of mathematical facts and the relationship between use and meaning, "correction by experience" doesn't just mean what happens when hypotheses consciously proposed by people who already count as having all the right mathematical concepts get refuted. Rather, "correction by experience" includes what happens when you are inclined to reason some way, you get to an unexpected conclusion, and then subsequently become disposed to draw slightly different inferences/feel less confident when engaging in some of the processes that led you there. You might or might not count as revising some hypothesis, phrased in terms of fully coherent concepts, when you do this.
p.p.s. The idea that experience helps us form coherent mathematical concepts, (while not figuring in the justification of our beliefs) is also a central theme in Carrie Jenkins' 2009 Grounding Concepts: an empirical basis for arithmetical knowledge.
Labels: philosophy of language, philosophy of math, thesis
Friday, October 16, 2009
Empirical adequacy and truth in mathematics
The current weakest link in my thesis is this (IMO): how to get from merely having beliefs about mathematics that help us solve problems and yield correct applications to concrete situations, to having beliefs about mathematics that are reasonably reliable.
Couldn't totally false mathematical theories nonetheless be perfectly correct with regard to their concrete applications?
Also, even if our beliefs would indeed perfectly accurately describe some abstract objects, how can we count as referring to these objects, given that we have no causal contact with them?
My current best answer is this:
Think of human mathematicians as observing certain regularities (e.g. whenever there are 2 male rhymes and 2 female rhymes in a poem there are at least 4 rhymes all together), and then positing mathematical objects "the numbers" whose relationship to one another is supposed to echo these logical facts.
(This is a reasonable comparison because what we actually do is like this, in that we happily make inferences from a proof that "a+b=c" to the expectation that when there are a male rhymes and b female rhymes there are c rhymes all together. We behave as though we know there's this relationship between the numbers and logical facts, so it's not too much of a stretch to compare us to people who actually consciously posit that there is some collection of abstract objects whose features echo the relevant logical facts in this way.)
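Here's a toy way of putting the "echoing" point: the logical fact being tracked is just that disjoint collections combine additively, which one can brute-force check in miniature (a sketch of my own, with small sets standing in for rhymes):

    # Brute-force check: for disjoint sets A, B drawn from a small universe,
    # len(A | B) == len(A) + len(B). "2 male rhymes and 2 female rhymes make at
    # least 4 rhymes" is one instance of this pattern, which "2 + 2 = 4" echoes.
    from itertools import product

    def check_mirroring(universe_size=5):
        universe = range(universe_size)
        for bits_a, bits_b in product(product([0, 1], repeat=universe_size), repeat=2):
            a = {x for x in universe if bits_a[x]}
            b = {x for x in universe if bits_b[x]}
            if a & b:
                continue  # only disjoint pairs instantiate "a + b = c"
            assert len(a | b) == len(a) + len(b)
        return True

    print(check_mirroring())   # True: no counterexample in this little universe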
Now either there are abstract objects or not.
If there aren't abstracta (as the fictionalist thinks), the fact that mathematicians only care about structures makes it plausible to think of them as talking about the fiction in which there are such objects.
Thus, our abstract-object positing mathematicians will count as speaking about the fiction in which there are objects whose features echo the logical facts about addition in the intended way. They will also count as knowing lots of things about what's true in this fiction.
Also, note that insofar as these mathematicians propose new things that "intuitively must be true of the numbers" their intuitions will be disciplined and corrected by the fact that the relevant applications are expected, so there's a systematic force which will keep some degree of match between their claims about this fiction and what's actually true in this fiction.
If there are abstracta, then there are abstract objects with many different structures, in particular structures corresponding to every consistent first-order theory (note this is even true if the only mathematical objects there are are sets! The completeness theorem guarantees that there are models of every such theory within the hierarchy of sets). So there will be some collection of objects whose features match those expected by our positers (note that the positers only really care about structural features of "the numbers", not whether they are fundamental mathematical objects etc.).
Now, how can our positers count as referring to some such objects? Well, as noted above, we have systematic mechanisms of belief revision which kick back and ensure that their claims about the numbers must match with logical facts, and hence with the real facts about these collections of suitable abstracta. Just as looking at llamas helps ensure that certain kinds of false beliefs about llamas which you might form would be corrected, applying arithmetic ensures that certain kinds of false general beliefs you might form about the numbers would be corrected (those which lead to false consequences about sums).
Thus, we have a situation where people not only have many beliefs that are true about the numbers, and the tendency to make many truth-preserving inferences, but also where these beliefs have a certain amount of modal stability (many kinds of false beliefs would tend to be corrected). Even Fodor thinks that making correct inferences with 'or' is sufficient to allow 'or' to make the right kind of contribution to the truth value of your sentences, so why should the same thing not apply to talk about numbers, given that we now have not only many good inferences but this kind of mechanism of correction which improves the fit between our beliefs about the numbers and the numbers?
You might still worry that there will be so many mathematical objects which have all the features which we expect the numbers to have - how can we count as referring to any one such structure, given that our use fits all of them equally well? And if we don't uniquely pick out a structure, how can our words count as referring and being meaningful? But note that to the extent that our use of the word "the numbers" is somehow ambiguous between e.g. different collections of sets, our use of the word "human bodies" would seem to be equally ambiguous between e.g. open vs. closed sets of spacetime points. So either meaningfully talking about objects is compatible with some amount of ambiguity, or the above kind of reasoning doesn't suffice to establish ambiguity.
Beliefs, natural kinds, and causation
I think that having the belief that there's a rabbit in the yard is a matter of having some suitable combination of dispositions to action, dispositions to experience qualia, relations to the external world etc. (roughly: those that would make an omniscient Davidsonian charitable interpreter attribute to you the belief that there's a rabbit in the yard).
But (I think) exactly which dispositions etc. are required is quite complicated, and in some respects arbitrary (e.g. verbal behavior that would equally well track the facts about rabbits and undetached rabbit parts counts as referring to rabbits).
Does this view that 'believing that there's a rabbit in the yard' may not pick out any supremely natural combination of mental states, prevent me from saying that beliefs can cause things?
No.
The facts about what physical combinations of stuff count as a baseball are equally complicated and arbitrary. But no one would deny that baseballs can figure in causal explanations e.g. the window broke because someone threw a baseball at it.
Just as the somewhat arbitrary fact that a regulation baseball has to have a diameter between two and seven-eighths and three inches doesn't prevent talk of baseballs from figuring in causal claims, the somewhat arbitrary fact that it's easier to count as referring to/thinking about rabbits rather than undetached rabbit parts doesn't prevent talk of beliefs from figuring in causal claims.
Explaining vs. justifying beliefs
Suppose I say that there's a fire in my room, and then you ask me why I believe there's a fire in my room. I could give a causal explanation for my belief (e.g. 'Some light bounced off a fire and this hit my eyes causing such-and-such brain changes in me') or I could try to justify the claim (e.g. 'I seem to see a fire, and I don't tend to hallucinate').
These are two very different things! Thus, I think it's totally wrong to assume that the (potentially infinite series of) other beliefs I might express if asked to justify my claim that there's a fire in my room somehow figured in causing the belief. If anything, these extra beliefs are probably simultaneous results of a common cause, namely the fire.
Fire
-causes->
Light hits my retina
-simultaneously-causes->
I believe there's a fire.
I believe that I seem to see a fire.
I believe that I seem to seem to see a fire.
...
This is not to deny that beliefs CAN cause beliefs, though, as in the case of conscious, Sherlock-Holmes-style chains of inference. Also, the absence of certain beliefs might be necessary for the production of other beliefs (e.g. the absence of the belief that I have taken fire-hallucination-causing drugs might be required for stimulation by light from a fire to cause me to form the belief that there's a fire).
Thursday, October 15, 2009
Conventionalism and Realism - are they incompatible?
Conventionalism and Realism are often presented as alternatives (for example, I recently heard a talk about whether Frege should be understood as a realist or a conventionalist about number). But (at least on my own best understanding of what 'conventionalism' might be) it's not at all clear that this is the case.
I'm tempted to understand realism and conventionalism as follows, in which case (I am going to argue) the two are perfectly compatible.
You are a realist about Xs iff you think there really are some Xs.
You are a conventionalist about Xs iff you think that we can reasonably address boundary disputes about just what is to count as an X, or what properties Xs are supposed to have, by imposing arbitrary conventions.
Here's an example. I think there really are living things. But I don't think the distinction between living and non-living things is such an incredibly natural kind that much would be lost by stipulating some slight re-definition of "alive" that clearly entails viruses are/aren't "alive". Hence, (by the above definition) I'm both a realist and a conventionalist about living things.
Maybe realism about Xs is only compatible with conventionalism about certain facts about Xs when the conventionalism concerns tiny boundary disputes about the extension of the concept X? But here's another example, where the extension of X will be completely different depending on what stipulation we make.
I'm a realist about human bodies, in that I think that there are indeed human bodies. But should human bodies be identified with *open* or *closed* sets of spacetime points? This issue is (just like the virus question above) one that it seems perfectly natural to settle by stipulation.
Thus, I don't buy the argument that Frege's willingness to allow some questions about what the numbers are to be determined by convention (assuming, as the speaker suggested, he was indeed so willing) shows that he's an anti-realist about number in anything like the ordinary sense of the term.
[edit: To put the point another way - you can be a realist about all the items that potentially count as numbers but think it's vague exactly which things do count as numbers.
Taking the extension of a concept to be somewhat arbitrary/conventional doesn't require thinking that the objects which are candidates to fall under that concept are somehow unreal]