This latest installment is about Nikolaj Pedersen's recent Synthese article on Crispin Wright. Pedersen criticizes (correctly, in my opinion) a certain possible motivation for Wright's idea that we are entitled to assume certain "cornerstone" propositions (like 'I'm not a brain in a vat') just because assuming them is requisite for getting any substantive theory of a given area off the ground. (You can't just point out that accepting ~BIV leaves you with many more true beliefs than the skeptic if BIV is false, and no fewer true beliefs if BIV is true. For avoiding false beliefs is presumably also epistemically important, and assuming ~BIV imposes a risk of having many more false beliefs.)
Instead he proposes that such cornerstone assumptions have "teleological value" insofar as they are aimed at something of value (namely, true belief), whether or not they actually succeed in producing such true beliefs. But this seems to immediately generalize to all beliefs - not just cornerstone ones.
For, what beliefs aren't aimed at the truth? It's just as true of the person who assumes the existence of a massive conspiracy as of the person who assumes the existence of the external world that they aim at having many true beliefs. Indeed, many people would say that it's a necessary truth, part of what it means for something to be a belief, that in believing that P one is trying to believe the truth.
With the possible exception of cases like the millionaire who bribes you to believe some proposition, all beliefs would seem to aim at truth. Hence it seems that all beliefs inherit teleological justification in Pedersen's sense.
One might be able to make this into an interesting view - all beliefs (not just cornerstone ones) are warranted until one gets active reason to doubt them. Such a position is reminiscent of conservatism and coherentism. But, judging from the article, Pedersen shows no sign of intending to say that all beliefs are default justified.
Thursday, November 26, 2009
Wednesday, November 25, 2009
Crispin Wright and Rule Following
In the paper on Rule Following here, Wright suggests that good reasoning proceeds in obedience to concrete rules, rules for which we can in principle give at least a rule-circular justification. I claim that Wright's view is tenable only if humanlike reasoners count as `obeying' infinitely many rules in this sense.
For, suppose a good reasoner obeys only finitely many such inference rules. And suppose, as Wright wants to claim, that the reasoner can come (by reasoning) to provide a rule-circular justification for each such rule (i.e. show that applying the rule cannot lead from truth to falsehood). But then our reasoner can combine these finitely many justifications to arrive at the conclusion that anything arrived at by applying some combination of these rules must be true. Hence, he can derive that the combination of these rules doesn't allow one to prove "0=1".
But remember that the rules are supposed to be concretely described. So our reasoner can syntactically characterize the system which combines all these rules (the one which, unbeknownst to him, captures all his good reasoning), and state Con(T), where T is the formal system which allows exactly the inferences allowed by these rules. But he knows the combination of rules is consistent, so he can derive the Con sentence for this set of rules. And, by the second incompleteness theorem (on the assumption that the good reasoner's reasoning extends Robinson's Q, so that the theorem applies), this is impossible.
Hence, if Wright's theory about obedience to rules is correct, any good reasoner who accepts principles of reasoning that include Q (like us) must be obeying infinitely many rules.
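Schematically, the argument runs as follows (a sketch, on the assumption that the finitely many rules can be bundled into a single recursively axiomatized theory T extending Robinson's Q, with Prov_T and Con(T) defined in the usual way):

\begin{align*}
&(1)\ T \vdash \text{``applying } R_i \text{ cannot lead from truth to falsehood''} \quad (i = 1, \dots, n)\\
&(2)\ T \vdash \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),\ \text{i.e.}\ T \vdash \mathrm{Con}(T) \quad \text{(chaining together the justifications in (1))}\\
&(3)\ \text{But by G\"odel II: if } T \supseteq Q \text{ and } T \text{ is consistent, then } T \nvdash \mathrm{Con}(T).
\end{align*}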
[This may be problematic if one wants the notion of obedience to a rule to have some kind of psychological reality]
[Note that this doesn't mean the good reasoner's behavior won't be describable in some more efficient way by some finite collection of rules, just that the reasoner doesn't have access to these rules, in Wright's sense of being able to prove that they are truth preserving]
Use, Meaning and Number Theory
I like to joke that all philosophical questions take their clearest and most beautiful form in the philosophy of math. But I think this is actually true in the case of questions about how much our use of a word has to "determine" the meaning of that word.
Consider the relationship between our use of the language of number theory, and the meaning of claims in this language.
I think the following two claims are about as uncontroversial as anything gets in philosophy:
a) The collection of sentences we are disposed to assert about number theory (on any reasonable sense of the word 'disposition') is recursively enumerable.
b) The collection of truths of number theory is not. In particular, there's a fact of the matter about all claims of the form "every number has recursively checkable property X". Whether fictionalism or platonism is the correct view about how to understand talk of the numbers, mathematicians are surely wondering about something when they ask "Are there infinitely many twin primes?" (maybe something about what would have to be true of any objects that had the structure which we take the numbers to have).
But what emerges from these two claims is a nice, and perhaps surprising, picture of the relationship between use and meaning.
I use the words "all the numbers" in a way that (being r.e., and hence incomplete by Godel's theorem) only allows me to derive certain statements about the numbers. We can picture my reasoning about the numbers as what could be gotten by deriving things from a certain limited collection of axioms.
BUT in listing this limited collection of statements, I count as talking about some collection of objects/some structure that objects could have. And there are necessary truths about what those objects are like/what anything that has that structure must be like, which are not among the claims my use allows me to derive.
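Put in standard logical terms (a sketch, assuming the number-theoretic sentences my use disposes me to assert form a consistent set extending Robinson's Q):

\begin{align*}
&\text{Let } T = \{\varphi : \text{my use disposes me to assert } \varphi\}. \quad \text{By (a), } T \text{ is recursively enumerable.}\\
&\text{G\"odel I: there is a true } \Pi_1 \text{ sentence } G_T \text{ with } T \nvdash G_T.\\
&\text{So the truths about the numbers properly outrun what my use lets me derive; indeed, by (b),}\\
&\text{the set of such truths is not recursively enumerable at all.}
\end{align*}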
[If you're daunted by the mathematical example, here's another one inspired (oddly) by Wittgenstein on phil math. You use the words "bauhaus style" and "ornate" in a certain way, mostly to describe particular objects. This gives your words a meaning (though perhaps there is some vagueness). They would apply to some objects but not to others. Hence the question "Can anything be both bauhaus style and ornate?" is either true, false, or perhaps indeterminate, if e.g. objects could be ornate in a way that makes it vague whether they are in the bauhaus style or not. But your use (e.g. your ability to say, when presented with one particular thing, whether it is bauhaus style/ornate) does not include anything which allows you to arrive at one answer to the question or another.]
So, there's a nice clear sense in which it strongly appears that: even if use determines meaning, facts about the meaning of our sentences can go beyond what our use of the words contained in them allows us to derive.
And, any philosophy that wishes to deny this claim will have to do quite a lot to make itself more plausible than a) and b) above!
Saturday, November 21, 2009
Plenitudinous Platonism, Boolos and Completeness
Plenitudinous Platonism tries to resolve worries about access to mathematical objects by saying that there are mathematical objects corresponding to every "coherent theory".
The standard objection to this, based on a point by Boolos, is that if 'coherent' means first-order consistent, then this has to be false because there are first-order consistent theories which are jointly inconsistent - but if 'coherent' doesn't mean first-order consistent, the notion is obscure.
I used to think this objection was pretty decisive, but I don't any more.
For, contrast the following two claims:
- TRUE: all consistent first-order theories have models in the universe of sets (completeness theorem)
- FALSE: all consistent first-order theories are true (Boolos's point)
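To see why the second claim fails, it's enough that some sentence is independent of a consistent theory (the set-theoretic case is the usual illustration, assuming ZFC is consistent):

\begin{align*}
&\text{If } T \text{ is consistent and } T \nvdash \varphi,\ T \nvdash \neg\varphi, \text{ then } T \cup \{\varphi\} \text{ and } T \cup \{\neg\varphi\} \text{ are each consistent,}\\
&\text{but they cannot both be true. E.g. } \mathrm{ZFC} + \mathrm{CH} \text{ and } \mathrm{ZFC} + \neg\mathrm{CH} \text{ are each first-order consistent,}\\
&\text{yet jointly inconsistent.}
\end{align*}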
Which of these is relevant to the plenitudinous platonist?
What the plenitudinous platonist needs to say is that whichever kind of first-order consistent things we said about math, we would have expressed truths. But remember that quantifier restriction is totally ubiquitous in math and life (if someone says all the beers are in the fridge they don't mean all the beers in the universe, and if some past mathematician says there's no square root of -2, they may be best understood as not quantifying over a domain that includes the complex numbers).
So, what the plenitudinous platonist requires is that every first-order consistent theory comes out true for some suitable restriction of the domain of quantification, and interpretation of the non-logical primitives. And this is something the reductive platonist must agree with, because of the completeness theorem! The only difference is that the reductive platonist thinks there are models of these theories built out of sets, whereas the plenitudinous platonist thinks there's a structure of fundamental mathematical objects corresponding to each such theory.
Thus, plenitudinous platonism's ontological commitments can be stated pretty crisply, as in the bold section above. And there's nothing inconsistent about these commitments, unless normal set theory is inconsistent as well!
Rabbits
Causal contact with rabbits seems to be involved in almost exactly the same way in the following two statements:
RH "There's a rabbit"
MP "The mereiological complement of rabbithood is perforated here" (Or, for short: "The Rabcomp is perf")
I mean, light bouncing off rabbits and hitting our eyes would seem to be what causes (assent to) both sentences.
Thus: if we try to say that RH refers to rabbits because assertions of it are typically caused by rabbits, we would (it seems!) also get the false result that MP refers to rabbits.
[Thus causal contact doesn't seem to be what does the work in resolving Quinean reference indeterminacy - which makes things look hopeful for the view that reference in mathematics can be as determinate as reference anywhere else.]
RH "There's a rabbit"
MP "The mereiological complement of rabbithood is perforated here" (Or, for short: "The Rabcomp is perf")
I mean, light bouncing off rabbits and hitting our eyes would seem to be what causes (assent to) both sentences.
Thus: if we try to say that RH refers to rabbits because assertions of it are typically caused by rabbits, we would (it seems!) also get the false result that MP refers to rabbits.
[Thus causal contact doesn't seem to be what does the work in resolving Quinean reference indetermenacy - which makes things look hopeful for the view that reference in mathematics can be as determinate as reference anywhere else.]
Friday, November 20, 2009
Speed Up Theorem and External Evidence
It's been suggested (e.g. by Pen Maddy, Philip Kitcher, and possibly by my advisor PK) that we can get some 'external evidence' for the truth of mathematical statements which are independent of our axioms, by noticing that they allow us to prove things which we already know to be true (because we can prove them directly from our axioms) much more quickly.
However, Godel's Speed Up Theorem seems to show that ANY genuine strengthening of our axioms would have this property. I quote from a presentation by Peter Smith:
"If T is nice theory, and γ is some sentence such
that neither T
⊢ γ nor T ⊢ ¬γ. Then the theory T + γ got
by adding γ as a new axiom exhibits ultra speed-up over T"
"Nice" here means all the hypotheses needed for Godel's theorem to apply to a theory, and "ultra speed up" means that for any recursive function, putatively limiting how much adding γ can speed up a proof, there's some sentence x whose proof gets sped up by more than f(x) when you add γ to your theory T.
Smith just points out that we shouldn't be surprised by historical examples of proofs using complex numbers or set theory to prove things about arithmetic.
But doesn't this theorem also raise serious problems for taking observed instances of speed up to be evidence for the truth of a potential new axiom γ ?
More Davidson Obsession
In their book on Davidson, Lepore and Ludwig suggest that when Davidson says an expression E is a semantic primitive if "the 'rules which give the meaning for the sentences in which it does not appear, do not suffice to determine the meaning of sentences in which it does appear'", he means that "someone who knows [these rules for how to use all sentences not containing E] is not thereby in a position to understand" sentences containing E.
Intuitively, I presume the idea is supposed to be something like this: "big cat" is not a semantic primitive, since you could learn its use just by hearing expressions like "big dog" and "orange cat" but "cat" is a primitive, since you wouldn't be able to understand this expression without previous exposure to sentences containing it.
However, I think this definition turns out to be rather problematic.
Firstly, by 'rules' Lepore and Ludwig later clarify that they don't mean consciously posited rules which we might have "propositional knowledge" of. So they don't mean something like "i before e, except after c". Rather, the relevant rules are supposed to be tacit, or unconscious.
So it seems like we can restate the criterion by saying something like:
E is a semantic primitive iff merely learning how to use expressions that don't contain E doesn't put one in a position to understand the use of E.
But now here's the problem.
-If "being in a position to understand" the use of E means being able to logically derive facts about the use of E then all words are semantic primitives. There's nothing logically impossible about a language in which there happens to be a special exception where, where by combine "big" and "cat" this means hyena rather than big cat.
- On the other hand, if "being in a position to understand" the use of E means being likely to use E correctly, this is a fact about the relationship between a language and varying aspects of human psychology.
Here's what I mean:
Model someone learning a language as having a prior probability distribution over all possible functions pairing up sentences of a language they know with propositions, and then reacting to experience by ruling out certain interpretation functions, when they fail to square with the observed behavior of people who speak the relevant language. On this model, theories like Chomskian linguistics amount to saying that babies assign 0 prior probability to certain regions of the space of possible languages.
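Here's a toy version of that model (purely illustrative: the hypothesis space, priors, and observation are invented for the example):

# Toy model: a learner has a prior over candidate interpretation functions
# (expression -> meaning) and eliminates any candidate that conflicts with
# observed usage, then renormalizes. All names and numbers are made up.

hypotheses = {
    "literal": {"big cat": "a cat that is big", "fat sound": "no meaning"},
    "liberal": {"big cat": "a cat that is big", "fat sound": "a rich, full sound"},
    "deviant": {"big cat": "a hyena",           "fat sound": "a rich, full sound"},
}
priors = {"literal": 0.4, "liberal": 0.4, "deviant": 0.2}

# Each observation pairs an utterance with what the speaker evidently meant.
observations = [("big cat", "a cat that is big")]

def update(priors, hypotheses, observations):
    """Zero out hypotheses inconsistent with the observations, then renormalize."""
    posterior = dict(priors)
    for expression, meaning in observations:
        for name, interpretation in hypotheses.items():
            if interpretation.get(expression) != meaning:
                posterior[name] = 0.0
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()} if total else posterior

print(update(priors, hypotheses, observations))
# The "deviant" interpretation is ruled out; "literal" and "liberal" remain live.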
We can imagine a continuum of logically possible distributions of prior probability, ranging from the foolhardy tourist who assumes that everyone is speaking English until given strong behavioral evidence against this, to the poet who feels sure he knows what a "fat sound" is the very first time he hears "fat" applied to things other than physical objects, to the anxious nerd who asks for examples of "fat" vs. "thin" sounds, to the hyperparanoid person who worries about the possibility that the combination of "fat" and "cat" might fail to mean a cat that's fat, just as the combination of "toy" and "soldier" fails to mean a soldier that's a toy.
Presumably actual (sane) people won't differ too much in their linguistic priors. [Though I wouldn't be surprised if babies and adults differed radically in this regard.]
But notice that being a semantic primitive turns out to have nearly nothing to do with the role of a word in a language. Rather it has to do with our cautious or uncautious tendency to extend examples of verbal behavior in one way rather than another. For the foolhardy tourist no English words are semantically primitive (on hearing a single word he comes to understand everything in one swoop) whereas all expressions are semantically primitive for the hyperparanoid person. Two people could learn the same language, and a word would be a semantic primitive for one of them, but not for the other.
Thus, so far as I can tell, the notion of 'semantic primitive' is incorrectly, or inadequately, defined for Davidson's purposes.
There's no limit to how complex a language a finite creature could "learn" on the basis of even a single observation. Whatever pattern of brainstates and behaviors suffice for counting as understanding the language, we can imagine a creature could just start out with a disposition to immediately form those, if it ever hears the sound "red". The only real limit on complexity of languages has nothing to do with learning, but rather with the complexity of the kind of behavior which competence with a given language would require. Our finite brains need to be able to produce behavior that suffices for the attribution of understanding of the relevant language.
Thus, I think, the claim that all human learnable languages have to have only finitely many 'semantic primitives' adds nothing but giant heaps of philosophical confusion and tortured metaphor to the (comparatively) clear and obvious claim that there have to be relatively short programs capable of passing the Turing test.
Wednesday, November 18, 2009
bookclub: Gareth Evans on Semantics + Tacit Knowledge I
I just discovered Gareth Evans has a neat article (probably a classic) about the very issues of what a semantic theory is supposed to do, which I've been worrying about recently. I found it so interesting that I'll probably write a few posts about different issues in this article.
The article starts out by paraphrasing Crispin Wright, to the following effect:
If philosophers are trying to state what it takes for sentences in English to be true, there's a very simple schema '"S" is true iff S' (this is called Tarski's T schema) which immediately gives correct truth conditions for all English sentences.
But obviously, when philosophers try to give semantic theories they aren't satisfied with just doing this. So what is the task of formal semantics about?
I think this is a great question. When I first read it I thought:
Perhaps what we want to do is notice systematic relationships between the truth conditions for different sentences in English e.g. whenever "it is raining" is true "it is not the case that it is raining" is false. If you want to make this sound fancy, you could call it noticing which syntactic patterns (e.g. sentence A being the result of sticking "it is not the case that" on to the front of sentence B) echo interesting semantic properties (e.g. sentence A having the opposite truth value from sentence B).
However, I would call this endeavor the study of logic, rather than semantics. So far we have logical theories that help us spot patterns in how words like "and" and "there is" (and perhaps "necessarily") affect the truth conditions for sentences they figure in. There may be similar patterns to notice for other words as well (e.g. color attributions - something can be both red and scarlet but not both red and green) and one could develop a logic for each of these.
We aren't saying what "and" means (presumably if we are in a position to even try to give a logic for English expressions we already know that "and" means and), rather we are discovering systematic patterns in the truth conditions for different sentences containing "and".
So, rule one other thing off the list.
Instead, Wright suggests (and Evans seems to allow) that semantics is to go beyond trivially stating the truth conditions for English sentences by "figuring in an explanation of the speaker's capacity to understand new sentences". (I am quoting from Evans, but both deplore the vagueness of this statement).
This sounds initially plausible to me, but it raises a question:
Once we have noticed that attributions of meaning don't require anything deeper than the kinds of systematic patterns of interactions with the world displayed by Wittgenstein's builders (maybe with some requirement that these interactions be produced by something that doesn't look like Ned Block's giant look-up table), the question of how human beings actually manage to produce such behavior seems to be a purely scientific question.
There are just neuroscientific facts, about a) how the relevant alterations of behavior (corresponding e.g. to learning the word "slab") are produced when a baby's brain is exposed to a suitable combination of sensory inputs and b) what algorithm most elegantly describes/models this process.
So, what's the deal with philosophers trying to do semantics? And what does it take for an algorithm to model a brain process better or worse? I'll try to get more clear on these questions, and what Evans would say about them, in the next post.
Labels: bookclub, philosophy of language, philosophy of mind
Saturday, November 14, 2009
Practical Helpfulness: Why Care?
Readers of the last two posts may well be wondering why I'm going on so much about the "practical helpfulness" of mathematics.
One thing is, I wish I had a better name for it than "practical helpfulness", so maybe someone will suggest one :).
More seriously, I think the fact that our mathematical methods are (in effect) constantly making predictions about themselves and other kinds of a priori reasoning - not to mention combining with our methods of observation to yield predictions that observation alone would not have yielded (see the computer example) - has two important consequences.
Firstly, it shows that our reasoning about math is NOT the kind of thing you are likely to get just by making a series of arbitrary stipulations and sticking to them. All our different kinds of a priori reasoning (methods for counting abstract objects, logical inference, arithmetic, intuitive principles of number theory, set theoretic reasoning that has consequences for number theory) fit together in an incredibly intricate way. Each method of reasoning has myriad opportunities to yield consequences that would lead us to form false expectations about the results of applying some other method. And yet, this almost never happens!
Thus, there's a question about how we could have managed to get methods of armchair reasoning that fit together so beautifully. Some would posit a benevolent god, designing our minds to reason only in ways that are truth-preserving and hence coherent in this sense. But I think a process of free creativity to come up with new methods of a priori reasoning, plus Quinean/Millian revision when these new elements did raise false expectations, can do the job. This brings us to the second point.
Secondly, if we think about all these intended internal and external applications as forming part of our conception of which mathematical objects we mean when we talk about e.g. the numbers, then Quinean/Millian revision when applications go wrong will amount to a kind of reliable feedback mechanism, maintaining and improving the fit between what we say about "the numbers" and what's actually true of those-mathematical-objects-whose-structure-mirrors-the-modal-facts-about-how-many-objects-there-are-when-there-are-n-Fs-and-m-(distinct)-Gs etc.
Examples
In my last post, I proposed that our methods of reasoning about math are "practically helpful", in (at least) the sense that they act as reliable shortcuts. Mathematical reasoning leads us to form correct expectations about (and hence potentially act on) the results of various processes of observation and/or inference, without going through these processes.
Now I'm going to give some more interesting examples of (our methods of reasoning about) mathematics being practically helpful to us in this way.
The general structure in all these is the same: composing a process of mathematical reasoning M with some other reasoning process A yields a result that's (nearly always) the same one you'd get by going through a different process B.
Examples:
1. Observe computer (wiring looks solid, seems to be running program p etc.), derive that the program it's running doesn't halt, expect it to still be running after the first 1/2 hour <--> observe computer after 1/2 hour
2. Observe cannonballs, form general belief about trajectory of ball launched at various angles, observe angle of launch, derive where trajectory lands <---> measure where this ball does land.
3. Prove a general statement, expect 177 not to be a counterexample <---> (directly) check whether 177 is a counterexample.
4. Conclude that some system formalizes valid reasoning about some math truths, expect that you aren't looking at an inscription of a proof of ``0=1'' in that system <---> check whether what you have is an inscription of a proof in the system that ends in ``0=1''.
5. Count male rhymes in poem, count female rhymes, then add <---> Count total rhymes
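To make example 3 concrete in code (the general statement here - that the first n odd numbers sum to n^2 - is just a stand-in for whatever general claim one has proved):

# Example 3: a general proof creates an expectation about the case n = 177,
# which we can compare against a direct check. Illustrative only.

def sum_of_first_n_odds(n: int) -> int:
    """Direct check: actually add up the first n odd numbers."""
    return sum(2 * k - 1 for k in range(1, n + 1))

n = 177
expected = n ** 2                     # expectation created by the general proof
observed = sum_of_first_n_odds(n)     # result of the direct check
assert expected == observed           # 177 is not a counterexample
print(expected, observed)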
[Special Case Study: Number Theory
If we focus on the case of reasoning about the numbers, we can see that there's a nice structure of mathematics creating correct expectations about mathematics, which creates correct expectations about mathematics, which creates correct expectations about the world.
- general reasoning about the numbers: Ax Ay Az ((x+y)+z) = (x+(y+z))
- calculations of particular sums: 22+23=45
- assertions of modal intuition: whenever there are 2 apples and 2 oranges there must be 4 fruit
- counting procedures: there are two ``e''s in ``there''
Note that each of the above procedures allows us to correctly anticipate certain results of applying the procedure below it. ]
How (Our Beliefs About) Math are Practically Helpful
All philosophers of math will agree that people do something they call "math", and that this activity is practically helpful, in a certain sense. This is often put pretty loosely by saying `Math helps us build bridges that stand up'. But I think we can say something much clearer than that. Here goes:
Our grasp of math (such as it is) has at least three aspects:
- We can follow proofs. You will accept certain kinds of transitions from one mathematical sentence to another, (or between mathematical sentences and non-mathematical ones) when these are suggested to you.
- We can come up with proofs. You have a certain probability of coming up with chains of inference like this on your own.
- Proofs can create expectations in us. Accepting certain sentences makes you disposed to react with surprise and dismay should you come to accept other sentences. e.g. if you accept "n is prime" you will react with surprise and dismay to a situation where you are also inclined to accept "n has p, q, and r as factors".
Now, the sense in which our mathematical practices are helpful is this:
First, our reasoning about math fits into our overall web of beliefs in such a way as to create additional expectations. Here's what I have in mind: Fix a situation. People in that situation who realize their dispositions to make/accept mathematical inferences arrive in a state where they will be surprised by more things than those in the same situation who don't.
For example, plonk a bunch of people down in front of a bowl of red and yellow lentils. Make each person count the red lentils and the yellow lentils. Now give them some tasty sandwiches and half an hour. Some of the people will add the two numbers. Others will just eat their sandwiches. Now, note that the people who did the math have formed extra expectations, in the following sense. If we now have our subjects count the lentils all together, the people who did the sum will be surprised if they get anything but one particular number, whereas those who didn't do the math will only be surprised if they get anything outside of a certain given range.
Secondly, the extra expectations raised by doing math are very very often correct. When doing mathematical reasoning about your situation puts you in a state where (now) you'd be surprised if a certain observation/reasoning yields anything but P, applying this process tends to yield P. (This is especially true if we weight the satisfaction/dissatisfaction of strong expectations more heavily). Thus, composing a process of mathematical reasoning M with some other reasoning process A nearly always yields correct expectations about the result of going through a different process B, if it yields any expectations at all.
And finally, this is (potentially) helpful, because it means not only do we acquire the disposition to be surprised if B yields something different, but any further inferences/actions which would get triggered by doing B happen immediately after doing A and M, without having to wait for B to take place. For example, in the case from the previous post: if we imagine that all of our sample population have inductively associated counting 1567 lentils in total with having enough to make soup, the people who did the addition after counting the lentils separately start cooking earlier than those who did something else instead.
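A toy version of the lentil case (the counts are invented; 731 + 836 = 1567):

# Two routes to an expectation about the whole bowl. Numbers are made up.
red_count = 731
yellow_count = 836

# Person A does the sum: forms a sharp expectation, and can act on it now.
expectation_a = red_count + yellow_count      # expects exactly 1567
enough_for_soup = expectation_a == 1567       # triggers cooking without recounting

# Person B eats sandwiches instead: at best a vague expectation, say a range.
unsurprising_range_b = range(1400, 1700)

total_counted_together = 1567                 # what counting the whole bowl yields
print(total_counted_together == expectation_a)         # True: sharp expectation met
print(total_counted_together in unsurprising_range_b)  # True: but B expected much less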
To summarize:
Doing math is practically helpful in the sense that spending time doing math raises extra expectations (relative to spending that time eating sandwiches) about the results of certain other processes, and these expectations are generally correct. Thus, mathematical reasoning constitutes a reliable shortcut, leading us to take whatever actions would be triggered by going through some other process B without actually going through B.
NOTE: I don't mean to suggest that this is all there is to math, or that math is somehow *merely* instrumental. I'm just trying to concretely state some data about the successful "applications" of math, which I think everyone will agree to.
Analyticity: No Free Lunch
Consider the following paradigmatic examples of analytic and synthetic sentences:
(1) "Dogs once existed."
(2) "Prime numbers have two distinct divisors: themselves and one."
Both of these statements feel extremely obvious to us. And, if anything, we're more likely to stop asserting (2) than (1) - if some perverse person wants to count 1 as a "prime" number, that's fine with me, so (if he's insistent enough) I'll adopt his usage and hence stop saying sentence 2 (and e.g. change how I state the fundamental theorem of arithmetic accordingly). So - we wonder, after reading Quine, what does the further claim that (2) is analytic amount to?
Here's an idea: If someone asked me to back up my assertion of (1), I'd be surprised, but there are things I would do to support it, e.g. give an example of a dog. If (bizarrely) I couldn't state any other claims in support of (1), I'd be troubled. In contrast, if asked to justify (2) I wouldn't be able to give any kind of argument for it AND I wouldn't be troubled by this, or inclined to revise. (Note: this is exactly when claims about analyticity and meaning come up in ordinary contexts - people say 'that's just what I mean by the term' when faced with skepticism about certain things.)
S is basic analytic in P's idiolect iff: P is happy to accept S without being able to provide any further justification.
S is analytic in P's idiolect iff: S is basic analytic, or S is derivable via some combination of premises and inferences, each of which is basic analytic.
This seems to pick out a relatively sharp class of sentences, and accord with our intuitive judgments of analyticity (at least if we assume that experience can somehow be cited as a justification [or something more sophisticated], so that direct observations don't count as analytic for the observer).
Does this refute Quine? No. For, let's think about what epistemological significance (this notion of) analyticity has. Do we have some kind of special access to analytic truths?
Making a bunch of new sentences analytic in your idiolect is just a matter of developing the inclination to say "that's just what I mean by the word" when pressed for a justification of these sentences. And this refusal to provide extra justification doesn't somehow ensure that the sentences you assert so boldly come to express truths.
For, what bucking up your insouciance like this does is change the facts about your use of words so that (now), if certain of your words are meaningful at all, these sentences will express truths. Thus, it makes these sentences/inferences function as a kind of implicit definition of your terms. But, as the famous case of Tonk shows, not all implicit definitions are coherent. Also, in changing the meanings of your words in this way, you run the risk of making other non-analytic sentences that you currently accept now express falsehoods.
Thus, saying that some sentence S is analytic isn't some kind of epistemic free pass for you to accept that sentence. All it does is semantically push all your chips into the center of the table with regard to S. Whereas before you ran the risk that S would express a falsehood, now there's a better chance that S will express a truth, but if it doesn't both S and a bunch of other sentences in your language will be totally meaningless.
So, here's my current position: the analytic-synthetic distinction is real, but it doesn't give the epistemological free lunch* which the logical positivists hoped it would.
*i.e. just saying that the facts about something (like math) are analytic doesn't banish mysteries about how we came to know these facts.
(1) "Dogs once existed."
(2) "Prime numbers have two distinct divisors: themselves and one."
Both of these statements feel extremely obvious to us. And, if anything we're more likely to stop asserting (2) than (1) - if some perverse person wants to count 1 as a "prime" number, that's fine with me, so (if he's insistent enough) I'll adopt his usage and hence stop saying sentence 2 (and e.g. change how I state the fundamental theorem of algebra accordingly). So - we wonder, after reading Quine, what does the further claim that (2) is analytic amount to?
Here's an idea: If someone asked me to back up my assertion of (1), I'd be surprised, but there are things I would do to support this e.g. give an example of a dog. If (bizzarely) I couldn't state any other claims in support of (1), I'd be troubled. In contrast, if asked to justify (2) I wouldn't be able to give any kind of argument for it AND I wouldn't be troubled by this, or inclined to revise. (Note: this is exactly when claims about analyticity and meaning come up in ordinary contexts - people say 'that's just what I mean by the term' when faced with skepticism about certain things.)
S is basic analytic in P's idiolect iff: P is happy to accept S without being able to provide any further justification (and isn't troubled by this, or inclined to revise).
S is analytic in P's idiolect iff: S is basic analytic, or S is derivable via some combination of premises and inferences, each of which is basic analytic.
This seems to pick out a relatively sharp class of sentences, and accord with our intuitive judgments of analyticity (at least if we assume that experience can somehow be cited as a justification [or something more sophisticated], so that direct observations don't count as analytic for the observer).
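To make the closure in this definition explicit, here's a minimal sketch in Python (my illustration only; the sentence strings and the inference list are made up): the analytic sentences of an idiolect are just the closure of the basic analytic sentences under the basic analytic inferences.

# Sketch (illustrative names): compute the sentences analytic in P's idiolect
# as the closure of the basic analytic sentences under basic analytic
# inferences. Sentences are strings; an inference is a pair
# (frozenset of premises, conclusion).
def analytic_closure(basic_sentences, basic_inferences):
    analytic = set(basic_sentences)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in basic_inferences:
            if premises <= analytic and conclusion not in analytic:
                analytic.add(conclusion)
                changed = True
    return analytic

# Toy usage:
basic = {"Bachelors are unmarried", "Unmarried men are not husbands"}
inferences = [(frozenset({"Bachelors are unmarried",
                          "Unmarried men are not husbands"}),
               "Bachelors are not husbands")]
print(analytic_closure(basic, inferences))

Nothing epistemically deep is built into the construction: it just records which sentences a speaker will defend by saying "that's what I mean by the word".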
Does this refute Quine? No. For, let's think about what epistemological significance this notion of analyticity has. Do we have some kind of special access to analytic truths?
Making a bunch of new sentences analytic in your idiolect is just a matter of developing the inclination to say "that's just what I mean by the word" when pressed for a justification of these sentences. And this refusal to provide extra justification doesn't somehow ensure that the sentences you assert so boldly come to express truths.
For, what bucking up your insouciance like this does is change the facts about your use of words so that (now), if certain of your words are meaningful at all, these sentences will express truths. Thus, it makes these sentences/inferences function as a kind of implicit definition of your terms. But, as the famous case of Tonk shows, not all implicit definitions are coherent. Also, in changing the meanings of your words in this way, you run the risk of making other non-analytic sentences that you currently accept now express falsehoods.
Thus, saying that some sentence S is analytic isn't some kind of epistemic free pass for you to accept that sentence. All it does is semantically push all your chips into the center of the table with regard to S. Whereas before you ran the risk that S would express a falsehood, now there's a better chance that S will express a truth, but if it doesn't both S and a bunch of other sentences in your language will be totally meaningless.
So, here's my current position: the analytic-synthetic distinction is real, but it doesn't give the epistemological free lunch* which the logical positivists hoped it would.
*i.e. just saying that the facts about something (like math) are analytic doesn't banish mysteries about how we came to know these facts.
Saturday, November 7, 2009
Davidson Obsession
Davidson proposes (in "Truth and Meaning") that for a language to be learnable by finite creatures like us there must be a finite collection of axioms which entails all true statements of the form '"Snow is white" is true if and only if snow is white'. Then he, and his followers, argue that people with various kinds of theories can't satisfy this constraint e.g. that nominalists can't get a theory that entails the right truth conditions for mathematical statements without using axioms that quantify over abstracta.
Something about this argument strikes me as fishy, and I've spent hours obsessing over it at various times, replacing one putative "refutation" with another. :( But I can't stop thinking about it, so here's my newest attempt.
First, grant that for someone to count as understanding some words they need to know all the relevant instances of Tarski's T schema. So they have to be disposed to assent to every such sentence. Now, as every sophomore seeing Davidson's argument for the first time points out, it's trivially easy to make a finite program that 'assents' to every query that's an instance of the T schema in a given language, or enumerates all such instances. But Davidson requires more: there needs to be a finite collection of axioms which logically entail all the instances. This is what gives Davidson's claim its potential bite. But now, we ask, why think this?
EITHER
Davidson thinks that to know the T schema you need to be able to consciously deduce its instances from other things you antecedently know. In this case the requirement that each instance of the T schema must be deducible from a finite collection of axioms would be motivated.
But this can't be right because no one can consciously produce such an axiomatization for our language. If we learned the T schema by consciously deriving it from some axioms, we should be able to state the axioms. Therefore, conscious deduction does not happen, and cannot be required.
OR
Davidson allows that it suffices for each instance of the T schema to individually feel obvious to you (and for you to be able to draw all the right logical consequences from it, etc.)
But to explain the fact that each sentence of this form feels obvious when you contemplate it, we just need to imagine your brain is running the sophomore-objection program which checks every queried string for being an instance of the T schema and then causes you to find a queried sentence obvious if it is an instance. Once we are talking about subpersonal processes there is no reason to model them as making derivations in first order logic, so the requirement is unmotivated.
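For concreteness, here is a minimal sketch of the kind of sophomore-objection program I have in mind (my toy code, restricted to the simplest form of the schema and ignoring quotation inside the mentioned sentence): it recognizes instances of the T schema without deriving them from any axioms at all.

# Sketch (illustrative): assent to a queried string exactly when it is an
# instance of the T schema, i.e. of the form  "S" is true if and only if S.
CONNECTIVE = '" is true if and only if '

def is_t_schema_instance(query):
    if not query.startswith('"'):
        return False
    split_at = query.find(CONNECTIVE)
    if split_at == -1:
        return False
    mentioned = query[1:split_at]                 # the quoted sentence S
    used = query[split_at + len(CONNECTIVE):]     # the used sentence
    return mentioned != "" and mentioned == used

print(is_t_schema_instance('"Snow is white" is true if and only if Snow is white'))   # True
print(is_t_schema_instance('"Snow is white" is true if and only if grass is green'))  # False

Whether a brain running something like this counts as "knowing" the instances is, of course, the question at issue; the point is only that recognition does not require derivation from a finite axiomatization.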
Perhaps Davidson might argue that the subpersonal processes doing the recognition are somehow doing something equivalent to quantifying over abstracta, so the nominalist, at least, would have a problem. But do subpersonal processes really count as quantifying over anything? And if they do, is there any reason we have to agree with their opinions about ontology?
Produce the Code!
There's a three-way debate going on between those who want to understand our ability to think in terms of the manipulation of intrinsically meaningful items in the head (physical token sentences of a language of thought), those who appeal merely to connections, and behaviorists who think it doesn't matter how our brain produces suitable behavior.
Obviously, one would like to know a lot more about the neuroscience of language use. But, so far as I can tell, the philosophical aspects of this debate could be resolved right now, by producing toy blueprints/sample code. Then we look at the code, and consider thought experiments in which the brain actually turns out to work as indicated in the code...
Linguistic Behaviorism vs. Non-behaviorism:
If you think that stuff about how competent linguistic behavior is produced can be relevant to meaning, produce sample code A and B with the same behavioral outputs, such that we would intuitively judge that a brain which worked in way A and one which worked in way B would mean different things by the same words. [I think Ned Block has done this with his blockhead]
If you think stuff inside the head also establishes determinacy of reference, contra Quine, produce two pieces of sample code A and B for a program that e.g. outputs "Y"/"N" to the query "Gavagai?", such that we would intuitively say people whose brains worked like A meant rabbit and those that worked like B meant undetached rabbit part.
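Purely as an illustration of the kind of sample code I'm asking for (the scene representation and predicate names are made up), here are two toy programs with identical "Y"/"N" behavior whose internal bookkeeping differs in the way the rabbit vs. undetached-rabbit-part contrast requires. Whether such an internal difference really fixes reference is exactly what would be at issue.

# Sketch (illustrative): same input/output behavior, different internal
# representations of the scene.
def program_A(visible_rabbits):
    rabbits = set(visible_rabbits)        # bookkeeping in terms of whole rabbits
    return "Y" if rabbits else "N"

def program_B(visible_rabbits):
    parts = {(r, p) for r in visible_rabbits
             for p in ("head", "torso", "feet")}  # bookkeeping in terms of undetached parts
    return "Y" if parts else "N"

print(program_A(["rabbit_1"]), program_B(["rabbit_1"]))  # Y Y
print(program_A([]), program_B([]))                      # N N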
Language of Thought vs. Mere Connectionism:
If you are a LOT-er who thinks things in the brain don't just co-vary with horses, but can actually mean `horse', produce sample code which generates verbal behavior, in response to sensory inputs, in such a way that we would intuitively judge pieces of the memory of a robot running that program to have meanings.
Then, produce sample code that works in a "merely connectionist way" and provide some argument that the brain is more likely to turn out to work in the former, LOT-like way.
[NOTE: it does not suffice merely to give a program that derives truth conditions for sentences, unless you also want to posit a friendly homunculus who reads the sentences and works out what proper behavior would be. What your brain ultimately needs to do is produce the correct behavior! So, if you want to compare the efficiency of merely connectionist vs. LOT-like theories of how your brain does what it does, you need to write toy programs that evaluate evidence for snow being white, rocks being white, sand being white and respond appropriately - not just the trivial program that prints out a list of sentences: "Snow is white" is true iff snow is white. "Sand is white" is true iff sand is white...]
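To be concrete, the trivial program I mean is something like this (a toy fragment of the language, obviously); note that it never evaluates evidence or produces behavior, which is the point of the NOTE above.

# Sketch (illustrative): enumerate T-sentences for a toy fragment of English.
def t_sentences(subjects):
    for subject in subjects:
        yield '"' + subject + ' is white" is true iff ' + subject.lower() + ' is white'

for sentence in t_sentences(["Snow", "Sand", "Rocks"]):
    print(sentence)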
Charitably, I think the LOT-ers want to say that the only feasible way of making something that passes the Turing test will be to use data structures of a certain kind. But until they can show some samples of what data structures would and wouldn't count, it's really hard to understand this claim. (I mean, the claim is that you will need data structures whose tokens count as being about something. But which are these?).
The Problem of Logical Omniscience and Inferential Role
I just looked over a very old thing I wrote about the problem of logical omniscience. The problem of Logical Omniscience is: How can you count as believing one thing, while not believing (or even explicitly rejecting) something logically equivalent?
I suggested that propositions have certain preferred inferential roles, and that you count as believing that P to the extent that you are disposed to make enough of these preferred inferences, quickly and confidently enough.
So for example, someone can believe that a function is Turing computable but not that it's recursive, even though these two statements are provably equivalent, because they might be willing to make enough of the characteristic inferences associated with Turing computability, but not those for recursiveness. (The characteristic inferences for "...is Turing computable" would be those that people call "immediate" from the definition of Turing computability, and ditto for the - different - characteristic inferences for "...is recursive".)
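Here is a minimal sketch of the proposal (my gloss; the threshold, the inference descriptions, and all names are made up for illustration): attribute the belief that P to the extent that the thinker is disposed to make enough of P's characteristic inferences.

# Sketch (illustrative): belief attribution via characteristic inferences.
def believes(thinker_inferences, characteristic_inferences, threshold=0.6):
    made = characteristic_inferences & thinker_inferences
    return len(made) / len(characteristic_inferences) >= threshold

# Two provably equivalent notions, with different characteristic inferences:
turing_computable = {"there is a Turing machine computing f",
                     "f can be simulated step by step",
                     "f is computable by an idealized mechanical procedure"}
recursive = {"f is in the least class containing the basic functions",
             "f is closed under composition, recursion and minimization",
             "f has a derivation in the schema for recursive functions"}

student = {"there is a Turing machine computing f",
           "f can be simulated step by step"}

print(believes(student, turing_computable))  # True: enough characteristic inferences made
print(believes(student, recursive))          # False: none made, despite provable equivalence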
This is interesting because:
1. The characteristic inferences associated with a proposition/word will NOT supervene on the inferences which that proposition/word justifies. Since Turing computability and recursiveness are provably equivalent, the very same inferences are JUSTIFIED for each one of them. But "This function is Turing computable" and "This function is recursive" need to have different characteristic inferences, to explain how you can know one but not the other.
2. Given (1), if you want to attach meanings to individual words, these meanings should not only include things like sense and reference, which help build up the truth conditions for sentences involving that word, but also something like characteristic inferences, which help you choose when to attribute to someone a belief involving this word rather than another word which would contribute in exactly the same way to the truth conditions of any sentence.
3. It's commonly said that aliens would have the same math as us. If this means that they wouldn't disagree with us about math, that sounds right. But if it means that they would (before contact with humans) believe literally the same propositions as we do, I don't think so.
For, think about all the many different notions we could define which would be equivalent to Turing computability, but have different characteristic inferences. If you buy the above, each of these notions corresponds to a slightly different thought. Thus for the aliens to believe the exact same mathematical claims as we do, they would have to have the same definitions/mathematical concepts. But it's much less clear whether aliens would have the same aesthetic sense guiding what definitions they made/mathematical concepts they came up with. For example, I'm much more convinced that aliens would accept topology than that they would have come up with it. I mean, just think about the different kinds of math developed just by humans in different eras and countries.
Freedom and Resentment in Epistemology
Everyone likes to talk about Neurath's boat, but I think common discussion leaves out something critical. Not only do we all start with some beliefs, but we also start out accepting certain methods of revising those beliefs, in response to new experience or in the course of further reflection. This is crucial because it brings out a deep symmetry between all believers:
At a certain level of description, there's no difference between the atheist philosopher who finds it immediately plausible that bread won't nourish us for a while and then suddenly poison us, and the religious person who finds it immediately plausible that god exists, or the madman who finds it immediately plausible that he's the victim of a massive conspiracy. Everyone involved is (just) starting with whatever they feel is initially plausible, and revising this in whatever ways they find immediately compelling.
Thinking about things this way can make one feel uncomfortable in deploying normative notions of justification. Being justified is supposed to be a matter of (something like) doing the best you can, epistemically, whether or not you are lucky enough to be right. But there's no difference in effort (or even, perhaps, in care) between the philosopher and the madman. It's just that the philosopher is lucky enough to find immediately compelling principles *that happen to be mostly true*, and inference methods *that happen to be mostly truth-preserving/reliable*. So how can we say that one of them is justified and the other is not?
One reaction to this is to deny that there is such a thing as epistemic normativity. There are facts about which people have true beliefs, and which of them are on course to form more true beliefs, which belief forming mechanisms are reliable (in various senses) etc. But there are no epistemically normative facts, e.g. facts about which reliably true propositions are OK to assume, or which reliable inference methods are OK to employ without any external testing.
Another possible reaction is to say that even though "ultimately" there's no difference between finding it obvious that bread will nourish you if it always has in the past and finding it obvious that you are the center of a conspiracy, there still are facts about justification. We can pick out certain broad methods of reasoning (logical, empirical, analytic(??), initially trusting the results of putative senses) which are both popular and generally truth preserving, and what it means to be justified is just to have arrived at a belief via one of those.
In either case, the result will give an answer to philosophical skepticism. The skeptic asks: "how can you be justified in believing that you have a hand, given that it depends on your just assuming without proof that you aren't a BIV?" Someone who has the first reaction can simply deny the contentious facts about justification. Someone who has the second reaction will be unimpressed by the point that they are "just assuming" that ~BIV. All possible belief is a matter of starting out "just assuming" some propositions and inference methods, and then applying the one to the other.
Thursday, November 5, 2009
Funny Footnote
Reading Mark Steiner's "Mathematics-Applications and Applicability", I noticed this footnote:
"Suppose we have a physical theory, like string theory, which postulates a 26 dimensional space. The number 26 happens to be the numerical value of the Tetragrammaton in Hebrew. Should this encourage us to try other of the Hebrew Names of God?"
[Note: in context, Steiner seems to think the answer to this question is yes]
"Suppose we have a physical theory, like string theory, which postulates a 26 dimensional space. The number 26 happens to be the numerical value of the Tetragrammaton in Hebrew. Should this encourage us to try other of the Hebrew Names of God?"
[Note: in context, Steiner seems to think the answer to this question is yes]
Sunday, November 1, 2009
Three jobs for logical structure:
You might think the "logical structure" of a sentence is a way of cutting it up into parts [eg. "John is happy" becomes "is happy(john)"] that does three things:
1. gets used by the logical theory that best captures all the valid inferences.
2. matches the metaphysical structure of the world.
3. explains how we are able to understand that sentence, by breaking it down into these parts, and understanding them.
However, it's not obvious that the method of segmentation which does any one of these things best should also do the others. I don't mean that this idea is crazy, just that it is a bold and substantive claim that logic unites cognitive science with metaphysics in this way.
It's also not obvious that *any* method of segmentation can do 1 or 2.
Task 1 might be impossible to perform because there might not be a unique best logical theory. If we think that the job of logic is to capture necessarily truth-preserving inferences, then second-order logic is logic. But any recursive axiomatization of second-order logic can be supplemented to produce a stronger one, since the truths of second-order logic aren't recursively axiomatizable. (One might hope, though, that all sufficiently strong logics that don't say anything wrong will segment sentences the same way.)
Task 2 might be impossible because the world might not have a logical structure to reflect. What do I mean by the world "having a logical structure"? I think there are two versions of the claim:
a. The basic constituents of the world are divided between the various categories produced by the correct segmentation e.g. concepts and objects in Frege's case.
This is weird because "constituents of the world" sound like they should be all be objects. But presumably objects don't join together to produce a sentence, so the kind of expressions used in your chunking up can't all be objects.
It's also weird because it just seems immediately strange to think of the world as having this kind of propositional structure, rather than our just using different propositions with structure to describe the world.
b. The objects that really exist (as opposed to those that are merely a facon de parler), are exactly those which are quantified over by true statements when these are formalized in accordance with the best method of segmentation. To misquote Quine: "the correct logical theory is the one such that, to be, is to be the value of a bound variable in the formalization of some true sentence in accordance with that theory."
So, for example, if mathematical objects can't be paraphrased away in first order logic, but they can be using modal logic, the question of whether mathematical objects exist will come down to which (if either) of these logics has the correct segmentation.
Finally, Task 3 is ambiguous between something (imo) silly and something about neuroscience.
The silly thing is that a correct segmentation should reflect what components *you* break up the sentence "John is happy" into, when you hear and understand it (presumably none).
The neuroscience question is: 'what components does *your brain* break this sentence up into, when processing it to produce correct future behavior, give rise to suitable patterns of qualitative experience for you, etc.?' This is obviously metaphorical, but I think it makes sense. It seems very likely that there will be some informative algorithm which we can use to describe what your brain does when processing sentences (it might or might not be the same algorithm for different people's brains). And, if so, it's likely that there will be some natural units which this algorithm uses.
Labels: ontology, philosophy of language, philosophy of math
Is there a logic that...
Is there a logic that would capture inferences like:
-"John is very rich" --> "John is rich"
-"John is very very very very rich"--->"John is very rich"
Obviously it won't do to say "rich(John) ^ very (John)".
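One natural move (just a sketch of one possible treatment, not a settled answer) is to treat "very" as a predicate modifier, so "John is very very rich" is very(very(rich))(John), with the rule that very(P)(x) entails P(x). Then one such sentence entails another iff it applies the same base predicate to the same subject with at least as many layers of "very". A toy check of the inferences on this treatment:

# Sketch (illustrative): "very" as a predicate modifier, with the rule
# very(P)(x) |- P(x), so entailment means stripping off layers of "very".
def parse(sentence):
    subject, _, predicate = sentence.partition(" is ")
    words = predicate.split()
    return subject, words.count("very"), words[-1]   # (subject, very-depth, base predicate)

def entails(premise, conclusion):
    s1, d1, p1 = parse(premise)
    s2, d2, p2 = parse(conclusion)
    return s1 == s2 and p1 == p2 and d1 >= d2

print(entails("John is very rich", "John is rich"))                      # True
print(entails("John is very very very very rich", "John is very rich"))  # True
print(entails("John is rich", "John is very rich"))                      # False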
-"John is very rich" --> "John is rich"
-"John is very very very very rich"--->"John is very rich"
Obviously it won't do to say "rich(John) ^ very (John)".