Here's something I've been puzzled by for ages (well, since taking Selim Berker's awesome metaethics class).

Suppose that Jim likes strawberries* best on even months and chocolate best on odd months, and each time the month changes, he says that he has discovered that chocolate has (or lacks) the property of deliciousness. I mean: he uses exactly the same verbal forms of expression that he would use to say that he discovered that my keys were in my backpack all along, draws logical consequences in the same way, etc. How can we understand someone with this kind of practice?

What's so crazily perplexing here is that Jim's use of a single sentence, "chocolate is delicious", seems to be associated with two norms:

1) Take your present and past selves to be disagreeing about the same proposition.

2) Judge for yourself in a way that agrees with your dispositions to act/enjoy (i.e. it's not OK to say "this is more delicious, but I don't like it").

If you have a practice with both these norms, psychological instability gives rise to factual uncertainty. But it's extremely hard to disentangle things.

You can capture aspect 1) of Jim's use if you interpret him as talking about the property of being 'the kind of thing people enjoy eating', but then you can't make sense of 2). On the other hand, you can make sense of 2) if you interpret him as saying something like 'I like chocolate' or 'yay, chocolate', but then you can't make sense of why he speaks as though he is discovering that some proposition he previously thought true was false. (In the first case, you can't make sense of why he says he's disagreeing with his past self; in the second case, it looks surprising that 'chocolate is delicious' logically embeds in the ordinary way, whereas 'yay, chocolate' doesn't - this is called the Frege-Geach problem.)

Like someone who uses the word 'tonk' (Prior's connective that has the introduction rule for 'or' and the elimination rule for 'and'), you can sort of describe how Jim behaves, and understand all the scientific facts about him, but you can't translate his sentences as expressing any proposition.
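To make the tonk point concrete, here's a throwaway sketch in Python (the function names are mine, just for illustration) of how tonk's mismatched rules trivialize inference: the 'or'-style introduction rule lets you attach any sentence you like, and the 'and'-style elimination rule then lets you detach it as a conclusion.

```python
def tonk_intro(p, q):
    # 'or'-style introduction: from P, infer "P tonk Q" for ANY Q.
    return ("tonk", p, q)

def tonk_elim(compound):
    # 'and'-style elimination: from "P tonk Q", infer Q.
    assert compound[0] == "tonk"
    return compound[2]

# Chaining the two rules "derives" any conclusion from any premise:
premise = "snow is white"
conclusion = tonk_elim(tonk_intro(premise, "2 + 2 = 5"))
print(conclusion)  # prints: 2 + 2 = 5
```

This is why no proposition can be assigned to tonk-sentences: any assignment would have to make both rules truth-preserving, and nothing does.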

[This reminds me of how McDowell says that the ethical antirealist can't understand the ethical realist. Maybe we are like the ethical antirealist in this situation. But McDowell says that the antirealist can't understand the realist as even having a practice/'going on in the same way'. Whereas I think I have a very clear notion of what Jim's practice is, and what it would take for him to go on in the same way; there's just no practice of mine that's sufficiently similar to his for me to translate him by equating his sentence 'chocolate is delicious' with any sentence of mine.]

However, unlike tonk (which lets you infer any proposition in your language from any other), accepting Jim's practice doesn't seem prone to lead him to false beliefs that don't involve the word 'delicious'. (Assuming you don't count false beliefs like: 'there's something I learned about chocolate which I didn't know last month'.) So, maybe a better analogy would be the word 'gleb', which has the introduction rule: if it's sunny, infer that it's gleb, and the elimination rule: if it's gleb, then infer that you should mail Sharon 5 dollars. Or the word 'chastity' as used by someone who thinks that's a virtue.
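The contrast with tonk can be sketched the same way (again, the names and encoding are mine): gleb's two rules only ever add one specific conclusion, so the damage is contained rather than explosive.

```python
def gleb_intro(facts):
    # Introduction rule: if it's sunny, infer that it's gleb.
    return facts | {"gleb"} if "sunny" in facts else facts

def gleb_elim(facts):
    # Elimination rule: if it's gleb, infer you should mail Sharon 5 dollars.
    return facts | {"should mail Sharon $5"} if "gleb" in facts else facts

# From "sunny" the rules yield exactly one new commitment, nothing more:
facts = gleb_elim(gleb_intro({"sunny"}))
print(facts)  # contains "should mail Sharon $5"

# And on a cloudy day the rules are inert:
print(gleb_elim(gleb_intro({"cloudy"})))  # prints: {'cloudy'}
```

Unlike tonk, nothing here lets you infer arbitrary conclusions; the worry about gleb has to be something other than inconsistency.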

But what exactly is going wrong here? And is it the same thing in all three cases? It's very tempting to say there's something wrong with all practices "of this kind", but what's the relevant kind?

*I picked the example of what you might call 'realism about deliciousness' since it's close to a real life example: most people I've met seem to be instinctively anti-realists about deliciousness. But, if you happen to share Jim's way of using the word 'delicious', just pick some other arbitrary combination - e.g. imagine someone who accepts the norms 1) believe that whatever your actual height is, is the X-est height for a human, and 2) take yourself to be disagreeing with past selves who accepted different claims about what the X-est height is.

## Saturday, August 29, 2009

### OQ1-Why is it epistemically OK to assume some necessary truths (2+2=4) but not others (the four color theorem)?

Here's the first "philosophical open question" that's currently driving me mad. Any ideas anyone?

Why is it epistemically OK to assume some necessary truths (2+2=4) but not others (the four color theorem)? We say that people who assume that 2+2=4 just because it feels obvious to them count as knowing, but people who assumed the four color theorem in the same way would not count as knowing. Note that in this case, neither belief is consciously produced by any kind of method which you might say is reliable in one case but not the others (one can imagine creatures for whom the subpersonal/unconscious faculty that produces the feeling that the four color theorem is obvious is reliable).

It's tempting to be a Cornell Realist about justification here. You might say: there are some true/valid principles/methods everyone accepts, and you are justified in believing whatever you can prove from these true principles using these truth-preserving methods of inference.

But what you say about justification is tied to what you say about everything else by the fact Moore pointed out - that you can't say: p, but I'm not justified in believing that p. If this Moorean fact is indeed the central part of our practice of talking about justification that it seems to be, every situation where you believe that p has to be one where (if you have the concept of justification) you'd either assent to 'I am justified in believing that p' or be inclined to stop believing that p. Taking the Cornell Realist line - the four color theorem is true, but I don't have a proof of it from things that most people would accept, so I'm not justified in believing it - sounds totally bizarre. I mean, it seems to be a core part of the way we use the word 'justification' that saying 'P, but I'm not justified in believing that P' is not an option.

So, maybe it's best to say that one is justified in assuming any and all mathematical truths, should these feel brutely obvious to one (though, of course, you would not be justified in believing mathematical truths which you derive from false premises). Note that you could still criticize someone for generating a random mathematical sentence and then taking meds that would make them find that sentence obvious. Even if they pick a true proposition, and hence count as having fully justified beliefs after they take the meds, their conduct beforehand involves a huge risk of producing false beliefs. Still, this is quite a radical and somewhat counterintuitive conclusion!


## Tuesday, August 25, 2009

### Löb's Theorem, Mechanism and Authority

In 'Löb's Theorem as a Limitation on Mechanism', Detlefsen uses Löb's theorem to argue for the 'general limitative thesis' that "There are epistemically valuable humanoid systems of belief A such that no humanoid observer O who uses A as an 'additive' epistemic authority can either know or truly believe of A that she is mechanizable (i.e. that her belief-set, A, is r.e.)."

By saying that O uses A as an 'additive' epistemic authority, Detlefsen means that A is disposed to assert something which O couldn't already learn by deriving it from his current beliefs. And for O to take A as an epistemic authority means that he's disposed to accept all sentences of the form 'if A asserts that X, then X'.

Now, Löb's theorem says that (given a formal provability predicate PROV which satisfies certain constraints) you can only prove "If PROV(X) then X" in situations where you can directly prove that X. And Detlefsen (in my opinion) correctly draws the consequence that the following two things can't simultaneously be true: (a) A is an additive epistemic authority for you; (b) A is mechanizable, and you know that a certain program M recursively enumerates all the things that A is disposed to say.*
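Stated schematically (taking S to be whatever formal system is in play, with a provability predicate satisfying the usual derivability conditions):

$$\text{if } S \vdash \mathrm{PROV}(\ulcorner X \urcorner) \rightarrow X, \text{ then } S \vdash X.$$

So within S, endorsing "whatever is provable is true" for a sentence is only possible where the sentence itself is already provable - which is exactly what blocks taking a known-to-be-mechanized authority's say-so as extra evidence.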

But note the following crucial difference between this point and the 'general limitative thesis' above. From the fact that, if you knew which program enumerates all the sentences A is disposed to accept, A would no longer be an *additive* epistemic authority for you, it doesn't follow that you can't truly believe that there is some program which enumerates the sentences that something which is currently an additive epistemic authority for you would accept.

Thus, I'm puzzled when Detlefsen immediately follows a statement that you can't know which program captures the behavior of something that's an additive epistemic authority for you with "we therefore maintain that", and then a statement of the General Limitative Thesis, which says that you can't know (or even truly believe!) that there is some such program.

*Actually, you don't need Löb's theorem for this. For, if A is an additive authority for me, there must be some proposition P which A can derive but I can't. But M recursively enumerates the set of sentences which A is disposed to accept. So there's some stage t at which A arrives at proposition P, and the t-th step of program M is to output P. But I can prove that the t-th stage of program M outputs P, and I believe that M enumerates all and only the sentences A accepts, AND I believe of each proposition that if A accepts it, then it's true. So, from these things, I too can derive that P. Contradiction.
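The footnote's argument can be modeled as a toy computation (everything here - M's contents, the stand-in sentence "P" - is my own illustrative setup, not from Detlefsen): if I can run the enumerator M myself and I accept "whatever A asserts is true", then any sentence M outputs becomes derivable by me, so nothing A asserts stays out of my reach.

```python
def M():
    # Stand-in for the program that recursively enumerates A's assertions.
    yield "2 + 2 = 4"
    yield "P"  # stands in for the proposition O supposedly can't derive alone

def O_derives(target):
    # O believes M enumerates exactly what A accepts, and that A is an
    # authority (everything A asserts is true). So O just runs M: once
    # "P" appears at some stage t, O can conclude that P is true after all,
    # contradicting the assumption that A was an *additive* authority for O.
    for t, sentence in enumerate(M()):
        if sentence == target:
            return f"M outputs {target!r} at stage {t}, so O concludes {target!r}"
    return None

print(O_derives("P"))  # prints: M outputs 'P' at stage 1, so O concludes 'P'
```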

