Tuesday, September 29, 2009

bookclub: MacFarlane contra Wright and Hale

I just finished reading MacFarlane's article in the newest Synthese, about neo-logicism. He argues that the ideas Wright and Hale put forward in favor of our license to adopt Hume's Principle as a stipulation would seem equally to support just directly laying down the axioms of PA.

This seems 100% right to me, and it occurs to me that my thesis proposal about how a priori knowledge is possible can be seen as taking this strategy.

However, I wouldn't say that we literally do stipulate the axioms of PA (would anyone say this? It seems like an obvious psychological/historical fact that people generally don't make any such stipulation). Rather, we come to find these statements (and others) obvious, and this functions like a stipulation in that it helps determine the meaning of our words in such a way as to make it likely that the statements we find obvious will express truths.

Just saying this doesn't completely dispel worries about the epistemology of math, though. For not all stipulations are OK - some stipulations, like those for "tonk", would not lead to a practice that counted as reliable reasoning. So there are two remaining questions.

1. Given that some stipulations are bad, how do we manage to make good stipulations so often?
2. Given that some stipulations are bad, how can we be justified in making any stipulation?

I think the answer to 1 is a combo of Quinean/Millian theory revision with a nudge from nature, together with facts about the relative profusion of abstract objects as targets for our stipulations.

I think the answer to 2 is that we can't help but start from what we find obvious and reason in ways that seem compelling, so we have prima facie warrant to do this - in the absence of any reason to doubt that these feelings of obviousness are reliable in a given sphere.

Sunday, September 20, 2009

Justification Puzzle #3: Indubitably, my dear Dr. Leary

What does it mean to say that a proposition is self-evident, or indubitable? Is our intuitive notion even coherent?

Here are three ways you might try to clarify what it is for a proposition p to be indubitable, and why they don't work. The puzzle is to do better.

1. It's metaphysically impossible to doubt whether p.

The idea behind this approach is that there are some things, like logic, which are so fundamental that if you tried to doubt them, you wouldn't count as thinking at all - and hence you wouldn't count as doubting.

But the problem is that there don't seem to be any single propositions with this feature. If we think about someone rejecting "logical reasoning" as a block, arguably they wouldn't count as a thinker. But, as Williamson has recently emphasized, the same doesn't seem to hold for any single proposition. He takes the example 'All vixens are vixens', and describes two apparently intelligible philosophers whose theories (e.g., that a statement about all Xs is only true if there's some instance, and that the apparent existence of vixens is a hoax) would lead them to reject this claim.

Here are two more examples (of my own devising) of putatively indubitable propositions which it's metaphysically possible to doubt.


"I am thinking"
Alice is a dualist in her philosophy of mind, and a hardcore externalist in her philosophy of language. Thus she thinks that what it takes for the experience of having some string of words pass through your mind to count as a thought depends on the role that your dispositions to use these words in your further thoughts and actions play in your life. E.g., the same phenomenology "I would like a glass of water" corresponds to one thought in the mind of someone from Earth, and another in the mind of someone from Twin Earth.

What about a brain+phenomenology that randomly pops into existence in the middle of the sun and is burned up the next second? Since it doesn't have a body, or any meaningful dispositions to use words a certain way, Alice would say that the brain doesn't count as thinking.

Now, when reading Descartes' meditations, Alice thinks: how do I know that I am not such a brain, popped randomly into existence for one second, and about to be consumed by fire the next? Such a brain would have exactly the phenomenology that I am having, yet it would not count as thinking.

"I am having an experience as if of a red patch"
Bob thinks: I'm certainly inclined to characterize the experience I'm having as one as if of a red object. But yesterday Alice asked me if I had ever seen fuchsia and green together, and I said yes, that rug over there is fuchsia and green. But then everyone else at the party pointed out that fuchsia is a kind of pink, not purple like the rug. So I was wrong when I judged 'I seem to see something fuchsia' before answering Alice's question. How do I know the same thing isn't happening now?

Admittedly, people certainly talk about whether things are red much more than about whether they are fuchsia, so if I were similarly wrong about how to identify (experiences as if of) red things, it's likely this would have been caught by now. But I might just be really unlucky, like those brains in vats Alice keeps telling me about.

2. It is impossible to conceive of a scenario in which one is wrong in judging that p.

This analysis fails because, by this definition, any necessary truth would be indubitable. Suppose (unknown to me) there are in fact infinitely many twin primes, and then I hear someone say 'Hasty Harry claims to have proved that there are infinitely many twin primes'.

Intuitively, it seems reasonable for me to doubt whether Harry is right, and whether there are, in fact, infinitely many twin primes. On its own, the latter claim is not indubitable - e.g., we want a proof partly because a proof would establish this apparent fact on the basis of claims that are indubitable.

However, I cannot conceive of a scenario in which I am wrong in judging that there are infinitely many twin primes, for to do this would require conceiving of a scenario in which there are not infinitely many twin primes. But (on the assumption that there really are infinitely many twin primes) what would count as conceiving of a scenario in which there are not? Surely I don't need to do this to be justified in doubting that there are infinitely many twin primes, suspending judgment until I find a firm proof, etc.


3. Psychologically, people are unable to feel doubt about whether p.

As Hume pointed out, we can doubt different things in the philosopher's closet vs. at the billiard table. Adding mind-altering drugs that inspire confidence (alcohol) or paranoia (caffeine), or that decrease the length of the arguments you can hold in your head at once, only extends the range.

Perhaps we should say something is indubitable if there is no possible psychological condition under which someone could doubt it. But, given the current state of psychology, do we have any evidence that any proposition has this feature?

Do we even have reason to think that there is such a limit? Absent a priori arguments that there are propositions about which doubt is unintelligible (see the argument against 1), it might be that for any thinkable proposition p, there's some possible psychological state of entertaining doubt as to whether p.

Saturday, September 19, 2009

Verificationism, Dummett, and the undead

Arguing against verificationism in 2009 seems a bit like flogging a mummified horse, but recent encounters with a number of smart people who take Dummett's motivations for intuitionism seriously have convinced me that sometimes the dead can walk again.

So, let me just say why I think the following idea is a total nonstarter:

'You can't understand a claim (e.g., "there are infinitely many twin primes") unless there's something you'd accept as verifying it or refuting it. Thus, if (as is probably true, for Gödel + anti-Penrose reasons) there are arithmetical statements which are independent of all the mathematical reasoning we accept (i.e., such that nothing we would accept as a proof or refutation settles them), we don't understand those statements. Hence, we don't know that "C or ~C" (pardon the abuse of notation) in cases where C is such an independent statement.'


This idea can perhaps be motivated by the good Wittgensteinian intuition that your understanding of a word consists in something like your ability to use it correctly. But it doesn't strictly follow from that idea (at most, what follows is that meaning supervenes on use, not that there is some use which verifies or falsifies every statement one understands). And, indeed, this verificationist argument can't be right.

For, either:

1. 'Verifying' a claim means becoming completely certain of that claim - as it were, assigning probability 1 to it, and hence being unwilling ever to question it later, whatever your future experience is. In this case, what could possibly verify ordinary scientific claims like 'There is at least one black raven'? Whatever experience you have with a black raven, there's always some further experience which could give you reason to doubt this claim.

2. 'Verifying' a claim includes being inclined to treat something as non-decisive evidence for it - having some experience that makes you guess that C (or not C), even though you'd be willing to revise this. But in this case, the independence of a statement hardly establishes that there's nothing that would make us more likely to guess that C or that ~C.

There are lots of things we take as giving us strong reason to believe a mathematical claim without quite amounting to a proof of it (think about how many people believe P != NP, but don't have a proof!).

In fact, if you allow facts about how we respond to seeming to see a proof, or to hearing that there's a proof of various related propositions, it's pretty much trivial that for any mathematical proposition there is something we would take as evidence that it's true (e.g., seeming to see a proof of it from ZFC, or a proof of some other claim that generalizes it, or of some special case where we expected counterexamples to be lurking).


So, if our neo-verificationist takes verification to require certainty, even the simplest empirical statements will be unverifiable. But, on the other hand, if he is only saying that in order to understand a claim there must be some possible experience that we would take as (strong) evidence for it, then a) this seems to be true, and b) he has given us no reason to doubt that independent mathematical statements satisfy it.

If, as I recently read, 'verificationism has never been decisively refuted', I think this is only because verificationism has never been stated reasonably clearly. Once you actually try to say what's required for a statement to count as having been verified, the project completely crashes.

Do any fans of verification out there care to answer this challenge, or any Dummett fans care to propose a better interpretation?

Wednesday, September 16, 2009

Justification Puzzle #2: The TF's Dilemma

Suppose someone makes the following inference, and you have to decide whether they count as being justified in accepting the conclusion.

3 is odd
3 is prime

Intuitively, one wants to say: if they are making the inference "x is odd ---> x is prime" then the answer is no, but if they are making the inference "3 is F ---> 3 is prime" then the answer is yes. So how do we tell/what determines which inference they are making?
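
To see the two candidate rules come apart, here's a toy sketch (in Python; the encoding is mine): both rules license the single transition from "3 is odd" to "3 is prime", but they diverge over other odd numbers.

def is_odd(n):
    return n % 2 == 1

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Rule 1 ("x is odd ---> x is prime") licenses the conclusion for every odd x;
# Rule 2 ("3 is F ---> 3 is prime") licenses it only when the subject is 3.
# Both license the step at n = 3, but Rule 1 goes wrong at n = 15:
for n in (3, 15):
    if is_odd(n):
        print(n, "is prime?", is_prime(n))  # prints True for 3, False for 15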

a. phenomenology: Is the answer to this question a matter of how the subject feels when making the transition above? But let me stipulate that this is a psychologically basic inference for them, in the sense that they don't consciously think of any rule before making it (on pain of regress, there have to be some inferences which have this status for us, if we make any inferences at all). So all they experience is saying to themselves with confidence and conviction "3 is odd" and then, a moment later, "3 is prime".

b. what other inferences they would make: Or maybe what matters is whether they tend to accept other things that are instances of the bad inference procedure but not the good one (e.g., would they say "15 is odd" and then, a moment later, "15 is prime")?

But it will always be the case that a person is disposed to accept some bad, and some good, particular inferences. So, how do we carve up the space of different inferences they are willing to make into different "kinds" of inference?

How do we decide that, e.g., being willing to infer '15 is odd'...'15 is prime' counts against being justified in inferring '3 is odd'...'3 is prime', but being willing to infer '3 is greater than 2'...'the number of gods is greater than 2' does not?

c. neuroscience: Well, maybe the workings of the brain will be best described by carving it up into different mechanisms which produce different classes of inferences. So maybe we need to look at the class of inferences which are made via the workings of the *same brain mechanism* that leads the person to say "3 is odd...3 is prime"? But we know almost nothing about how brain functioning is best individuated into different 'processes'. So, if this were right, it would seem that we aren't yet in a position to evaluate claims about justification, even in normal cases.

d. just saying there's a brute primitive division of inferences into natural kinds: OK, this is the best I can come up with, but it's certainly not very attractive.

p.s. I realize this is kind of like the generality problem for reliabilism. But this problem seems to apply to everyone who accepts that we can be justified in making some logical/mathematical inferences.

Tuesday, September 15, 2009

Island of Utilitarians Puzzle

Just a random thing I was wondering about:

How would you tell the difference between an island of utilitarians who used the word "gala" to mean good, vs. an island of non-utilitarians who wanted utility to be maximized (in the same way that I might want posthumous fame) but did not believe that utility maximization was morally good, and used "gala" to mean utility-conducive?

Intuitively there's a difference between thinking that X is morally good, vs merely desiring that X.

But, in both cases, I will a) try to bring X about, b) try to convince others to help me bring X about (if I think I can do so, and their help might be valuable), c) desire that I continue to desire that X, if my desiring X in the future can help bring X about...

So how can you tell the difference?

Thursday, September 10, 2009

Against Kim's argument against naturalized epistemology

In "Epistemology Naturalized", Quine suggests that we stop worrying about epistemic normativity and just study the engineering problem of how to get into situations where we reliably form true beliefs. So, for example, we might do scientific studies on the reliability of witnesses of various kinds of events, under various conditions. Or we might use informal mathematical arguments to show that all reasoning of a certain formal kind is truth-preserving.

Jaegwon Kim objects to this by making the following claim: the notions of justification and epistemic normativity are necessary to make sense of the very idea of belief. Someone believes that P iff an ideal interpreter would assign them the belief that P. And such an ideal interpreter assigns them beliefs by interpreting their utterances in such a way as to jointly maximize a) the simplicity of the interpreter's theory and b) the degree to which (on the whole) the subject comes out having beliefs that are justified. Thus, it doesn't make sense to study the reliability with which someone forms true beliefs while rejecting the notion of epistemic normativity.

However, I don't buy that in order to understand the notion of belief we must accept some kind of analysis of it into other terms. You might think: we are just trained in the practice of interpretation, like we are trained to recognize certain things as games. We don't do this by consciously reasoning about justification, or Davidson's maxims, or any other thing that one might use to try to define the notion of belief. Maybe there aren't any informative necessary and sufficient conditions for having a belief that P, or the only such conditions are extremely complicated and will only be discovered after years of work by linguists.

If this is right, the argument 'Unless the notion of justification is coherent, there will be no informative analysis of what it takes to count as having a given belief! Therefore, the notion of justification is coherent.' looks pretty unconvincing. Maybe the notion of belief is primitive.

Note that saying there need not be necessary and sufficient conditions does not mean that there cannot be interesting cognitive science done, breaking down our capacity to recognize when a subject S counts as standing in the belief relation to a proposition P into various components.

Consider the following little fish. It has (among other things) four sensors a, b, c and d. Whenever a and b are touched it says "I feel x-ish", whenever c and d are touched it says "I feel y-ish", and whenever a and c are touched it says "I feel z-ish". There are no informative necessary and sufficient conditions to be given within the fish's language (assuming this is all of its vocabulary that relates to these four sensors). And yet there is a nice simple relationship between these three claims at the level of sub-personal cognitive processing.
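
Here's a minimal sketch of the fish (in Python; the encoding is my own): the three reports share no definitional relations in the fish's vocabulary, but the sub-personal rule generating them is simple.

# Each report fires on a pair of sensors; the "simple relationship" between
# the three claims lives here, at the sub-personal level, not in definitions
# the fish could state in its own vocabulary.
REPORTS = {
    frozenset("ab"): "I feel x-ish",
    frozenset("cd"): "I feel y-ish",
    frozenset("ac"): "I feel z-ish",
}

def fish_says(touched):
    """Return the fish's report for a set of touched sensors, if any."""
    return REPORTS.get(frozenset(touched))

print(fish_says("ab"))  # I feel x-ish
print(fish_says("ca"))  # I feel z-ish (frozenset ignores order)
print(fish_says("bd"))  # None - no report for this combination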

Thus, if the naturalized epistemologist prefers to take "believes that" as a primitive (by not attempting to define it in any other terms), this doesn't suggest any kind of defeatism about the power of cognitive science to explain our complex linguistic capacities by appeal to simple systems.

But maybe I didn't need to say any of the stuff in the last three paragraphs, because (usually?) the proponents of epistemic normativity think it's irreducible. So, I should think, it's no worse to take there to be primitive facts about the believing relation than about epistemic normativity.

When is something a computer?

There HAS to be a great analysis of this somewhere, but looking in the usual places (Stanford Encyclopedia, first few Google hits) turns up nothing. Here's my question:
What does it take for something to be a computer?

The obvious place to start is with Turing machines.

But Turing machines are abstract objects (e.g., ordered 7-tuples), so being a computer doesn't mean being a Turing machine. However, for each Turing machine we can define a function which takes you from a statement of one state of Turing's imaginary infinite-tape machine to another, in accordance with the rules specified by the 7-tuple. And the natural thought is that we should say something is a computer if it bears the same kind of relationship to the Turing-machine-qua-7-tuple as the imaginary infinite-tape machine does.

To be clear, there are three things:
-the 7-tuple (the set that encodes the Turing machine program)
-the imaginary machine with infinite tape (whose behavior is completely determined by what it finds on its input tape plus the 7-tuple for its program)
-the physical system that counts as a computer.
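
To make the first two items concrete, here's a minimal sketch (in Python; the encoding is my own, and I compress the 7-tuple down to its transition table) of a program-as-mathematical-object and the state-to-state function it determines:

from collections import defaultdict

def make_stepper(delta, blank="_"):
    """Given a transition table delta: (state, symbol) -> (state, symbol, move),
    return the function taking one configuration of the imaginary
    infinite-tape machine to the next."""
    def step(config):
        state, tape, head = config
        cells = defaultdict(lambda: blank, tape)
        new_state, write, move = delta[(state, cells[head])]
        cells[head] = write
        return (new_state, dict(cells), head + (1 if move == "R" else -1))
    return step

# A toy machine that writes 1s and moves right forever:
delta = {("q0", "_"): ("q0", "1", "R")}
step = make_stepper(delta)
config = ("q0", {}, 0)  # (state, tape contents, head position)
for _ in range(3):
    config = step(config)
print(config)  # ('q0', {0: '1', 1: '1', 2: '1'}, 3)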

And my initial proposal is this:

X is a computer running a program corresponding to Turing machine t iff there's some "correspondence" function f which pairs (say) English sentences describing physical states of X with strings describing states of the imaginary infinite-tape machine (or the numbers that code for these strings), in such a way that whenever X is in a given state a, it goes into a state b such that, according to the Turing program t, the imaginary machine in state f(a) must go to state f(b).
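
In other words, f must commute with the dynamics. A hedged sketch of the condition (Python; phys_step, tm_step, and the finite sampling are my assumptions for illustration):

def realizes(f, phys_step, tm_step, sample_states):
    """Check, on a finite sample of physical states, that evolving X and then
    translating agrees with translating and then running the program:
    f(phys_step(a)) == tm_step(f(a))."""
    return all(f(phys_step(a)) == tm_step(f(a)) for a in sample_states)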

But this has two problems. The easy problem is that we want to allow that something can be a computer while, in some ways, not behaving like the imaginary Turing machine: it can have only finite memory, or be disposed to break. Here we could just say that something is more of a computer the more perfectly it realizes the machine's behavior, and admit (as is intuitively the case) that the notion is somewhat vague.

A somewhat harder problem is putty. Imagine putty that is extremely sensitive to inputs, so that as it oozes along in a complicated way, it never returns to an identical physical state, and it is always disposed to behave differently depending on what input it gets (i.e., where you poke it). Putty shouldn't count as a computer, intuitively speaking. But we can define a function from strings describing the total state of the imaginary machine to suitably detailed sentences about the putty which satisfies the above. We just identify two different physical configurations of the putty as counting as "the same" putty state whenever the imaginary machine is supposed to be looping back to the same state, and not otherwise.
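
The trick can be made explicit. A sketch of the gerrymandered correspondence (Python; the setup is my own): if the putty never repeats a state, we can define f by brute-force pairing along its trajectory, and the commuting condition above is satisfied automatically.

def gerrymandered_f(putty_trajectory, tm_step, initial_config):
    """Pair the i-th (never-repeating) putty state with the i-th machine
    configuration, making the putty 'realize' any program whatsoever."""
    f, config = {}, initial_config
    for state in putty_trajectory:  # assumed pairwise distinct
        f[state] = config
        config = tm_step(config)
    return f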

To fix this, I suggest that we require human usability. We should say: something only counts as a valid correspondence function if humans can be trained to read off, based on suitable inspection of the state s of X, the description f(s) of the imaginary Turing machine (without getting to watch X's input, or knowing the particular program being run). And maybe we should also require that humans can easily and reliably cause the computer to go into sufficiently many of the possible input states.

But this doesn't quite solve the problem. For what about putty with a firm face that stores its input program in a visible way on the left side, and then just oozes on the right side? So we need to stipulate that people are able to do the above without antecedently coming to know the computer's input in any way.

Thus, the final definition is:

X is a computer to the extent that there's a correspondence function f, going from states of X to numbers coding states of the imaginary Turing machine as above, which can be applied by humans without knowledge of the program X is supposed to be running or of the input to X.

Can one intelligibly deny that one ought to form true beliefs?

It's sometimes said that you can't intelligibly deny that it's ceteris paribus valuable to have true beliefs (i.e., no one who doesn't accept this claim counts as having thoughts at all). The idea is that in order to count as a thinker at all, you have to tend to have mostly true beliefs (or ones that are mostly reliable, truth-preserving, or justified). For Davidsonian reasons, we can't interpret people at all unless we make them out to be mostly true/reliable/justified etc.

But I claim that actually, even if this Davidsonian idea is right (I think it probably is), it doesn't entail that someone can't coherently deny that having true beliefs is valuable.

For all the Davidsonian considerations mentioned above would seem to require is that a person DOES modify their beliefs in a way that tends towards truth/matches up with what they would be justified in believing etc. They don't have to BELIEVE it would be GOOD to so modify their beliefs. Thus, I claim, a philosopher who denied that there was such a thing as epistemic normativity would count as denying that it is ceteris paribus valuable to have true beliefs (they don't think anything is valuable). (On the other hand, it is probably not possible for someone to deny that it's ceteris paribus good to have true beliefs, while thinking something else would be ceteris paribus good.)

What makes a crucial difference here is the difference between thinking about whether Obama is a good president and thinking about whether one has reason to think that Obama is a good president. Colloquially we often use the expressions interchangeably. But, in fact, in normal deliberation I don't entertain any propositions about what I should believe. I don't think about reasons, or beliefs. I just think about Obama, and happen to form beliefs mostly in cases where (in fact) it is reasonable to form such a belief.

All we need in order to interpret someone as having beliefs is for them to tend to revise those beliefs in cases where they should revise them, NOT for them to have any beliefs to the effect that they should revise them. The philosopher who denies epistemic normativity is an example of how you can have one without the other.

Fully general implies recursively axiomatizable?

I keep reading things in the history of phil math which seem to assume that:
'Theory t is fully general' implies 'Theory t is logical', which implies 'Theory t is recursively axiomatizable'.

E.g., Frege thought that math might be a matter of logic, i.e., fully general principles of reasoning. But then we discovered that his particular axiomatization didn't work, and we learned from Gödel that no recursively enumerable axiomatization could capture all mathematical truths. Hence, we learned that math is not a matter of pure logic, and that it is not fully general, but rather contains subject-matter-specific truths.
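
For reference, here is the standard statement of the result being appealed to (my paraphrase):

\[
\text{If } T \text{ is consistent, recursively axiomatizable, and interprets enough arithmetic,}
\]
\[
\text{then there is an arithmetical sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T,
\]

so in particular no such theory proves exactly the arithmetical truths.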

But why are we assuming that being "fully general" implies being recursively axiomatizable? Can anyone else see an argument for this claim?

[I also feel like there's a tendency to assume 'fully general' implies 'logical' implies 'epistemically unproblematic'. But, e.g., second-order logic would seem to be fully general while still being epistemically problematic. So, again, I'd like to know what argument there might be for this.

Indeed, it's not clear to me what, if any, entailment relations hold between:
1. is logical
2. is recursively axiomatizable
3. is epistemically unproblematic
4. must be accepted by any thinker
5. is fully general
6. is true in virtue of meaning
7. can be known merely by reflecting on meaning]