Saturday, December 25, 2010

Three Arguments for A Priori Knowledge of (Very) Contingent Facts

Can we have a priori knowledge of contingent facts? For example, consider the proposition below. Can we know truths like the following a priori?

NOT PEA SOUP: 'It is not the case that everything outside of a 5 foot radius around me is made of pea soup, which stealthily forms up into suitable objects as I walk by.'

Here are three positive arguments (in ascending order of strength, IMO) for the conclusion that we can know NOT PEA SOUP a priori.

1. Argument from Crude Reliabilism

The belief-forming method of assuming that you aren't in a pea soup world is reliable. And even if we make things a little less crude by saying that good belief formation is belief formation that works via a chain of methods which are *individuated in a psychologically natural way* and are reliable, we will probably still get the same conclusion. For plausibly the most natural relevant psychological mechanism involved in generating that belief would be something like, 'believe not P when P is sufficiently gerrymandered'.

2. Argument from Probability and Conditionalization-Based Models of Good Inference

Suppose you think that good reasoning is well modeled by the idea of assigning a certain probability measure to the space of possible worlds, then ruling out worlds based on your observations, and asserting that P if and only if a sufficient fraction of the remaining probability is assigned to worlds in which P. Then there will be some propositions P with a low enough prior probability to warrant asserting ~P before you have made any observations - and plausibly the pea soup hypothesis is one of them. Presumably in such cases your justification does not depend on experience. [I think Williamson has something like this in mind in one of his papers on skepticism, but his argument was more complicated.]
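The picture just described can be made concrete with a small toy model. Everything below (the three coarse-grained worlds, the prior numbers, the 0.99 threshold) is my illustrative choice, not something from the argument itself; the point is only that a sufficiently gerrymandered hypothesis can fall below the assertion threshold before any observation is made.

```python
# Toy model of "good inference as conditionalization": spread prior
# probability over a finite space of worlds, rule out worlds inconsistent
# with your observations, and assert P iff enough of the surviving
# probability mass lies in P-worlds.

def assertable(worlds, prior, observations, prop, threshold=0.99):
    """Assert prop iff, after conditioning on the observations, at least
    `threshold` of the remaining probability lies in worlds where prop holds."""
    live = [w for w in worlds if all(obs(w) for obs in observations)]
    total = sum(prior[w] for w in live)
    mass = sum(prior[w] for w in live if prop(w))
    return total > 0 and mass / total >= threshold

# Three coarse-grained worlds; the pea-soup world gets a tiny prior.
worlds = ["normal", "slightly_odd", "pea_soup"]
prior = {"normal": 0.90, "slightly_odd": 0.099, "pea_soup": 0.001}

not_pea_soup = lambda w: w != "pea_soup"

# With NO observations at all, ~PEA SOUP is already assertable:
print(assertable(worlds, prior, [], not_pea_soup))  # True
```

Note that an ordinary contingent claim like "the normal world obtains" is not assertable prior to observation on the same numbers, which is what separates NOT PEA SOUP from garden-variety empirical propositions in this model.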

3. Argument from Current Knowledge plus Inability to Cite Experiential Justification

The claim that NOT PEA SOUP is a priori follows from a claim about knowledge that only a skeptic would deny, plus a somewhat intuitive claim about the relationship between apriority and justification. The intuitive claim I have in mind is this: if someone can count as knowing that P without being able to point to any relevant experience (or memory of experience, or reason to believe that they had experience, etc.) as justification, then they know that P a priori, so P is a priori (i.e. a priori knowable). Everyone but the skeptic agrees that people know that they aren't in the pea-soup world. These people who know cannot point to any experience as justification. Hence, NOT PEA SOUP must be knowable without appeal to experience for justification.

You might try to defend the a posteriority of NOT PEA SOUP by saying that even if the man on the street can't make any argument from experience to NOT PEA SOUP, our intuition that people know NOT PEA SOUP is based on the assumption that there exists some good argument from something about experience to NOT PEA SOUP, and philosophers just need to discover it. In this way, experience really would be necessary to justify the belief that NOT PEA SOUP, so the proposition would be a posteriori.

However, this response threatens to generate the unattractive conclusion that people today do not know NOT PEA SOUP. For, in general, the mere existence of a good argument for some proposition that I believe does not suffice to make me justified in believing that proposition now, if I cannot (now) give that argument. If I believe some mathematical theorem T on a hunch or on the basis of tea leaf reading, the mere fact that there is a good argument for T on the basis of things that I accept doesn't suffice to let me count as knowing that T. So even if there is some cunning philosophical argument yet to be discovered which justifies NOT PEA SOUP on the basis of experience, this argument cannot suffice to justify people's accepting NOT PEA SOUP now. If people now are justified in believing NOT PEA SOUP, and can give no argument from experience for this claim, it must be that the claim can be justifiably believed without appeal to experience.

Sunday, December 19, 2010

Reliabilism and the Value of Justification: The Angel's Offer

A major objection to reliabilism about justification is that it doesn't explain why we value having knowledge of a given proposition more than mere dogmatic true belief. Believing a true proposition via a reliable method just makes you more likely to believe other true propositions; there's no obvious sense in which your relationship to true beliefs formed by reliable methods is thereby intrinsically any better or more valuable than your relation to mere true beliefs. If we don't like a particular cup of good espresso any better for its being the product of a machine that reliably makes good espresso, why should we like a particular state of believing a truth any better for the fact that it was produced by processes that reliably lead to believing the truth?

But maybe we DON'T value the special relation we bear to justified true beliefs over and above its tendency to promote having stable true beliefs. Consider this thought experiment:

An angel convinces you that he knows the true laws of physics, and perhaps also that he can do supertasks and thereby knows certain statements of number theory which cannot be proved from axioms you currently accept. The angel offers to make these true principles feel obvious to you - the way 'I exist' or '2+2=4' feel to you now. He will wipe your memory of this conversation, so you will not be able to justify these feelings to yourself by appeal to the reliable way you got them - but of course you won't feel the need to justify them to yourself, since they will just feel obvious and you will be inclined to accept them immediately. [Suppose also, if it matters, that the angel will do the same to everyone in your community, that community members prefer to go along with whatever choice you make, that the angel is already going to blur your memories of not finding these claims obvious in the past, etc.]
Would you accept the offer?

I personally would definitely take the offer. And I think many people would share this preference. If there were something intrinsically valuable about knowing, versus merely dogmatically assuming, a necessary truth, that would be a strong reason not to take the angel's offer. But if Plato is right (as thinking about the example tempts me to think he is) that the only bad thing about dogmatically assuming truths rather than knowing them is that dogmatic assumptions don't stay tied down, then the angel's offer to make you and everyone else in your community find these truths indubitable fixes that problem - and you should take him up on it.

Friday, December 17, 2010

Dilemma re: (Platonist) Structuralism

I've got to go reread my Shapiro. But before his smooth writing bewitches me, let me note down the very simple objection that I am currently unable to see how he would answer.

Structuralism is traditionally motivated by the desire to address a problem from Benacerraf: there are multiple equally good ways of interpreting talk of numbers as referring to sets, so that any answer to "what set is the number 3?" seems unprincipled. But now:

If you are not OK with plentiful abstract objects, you can't believe there are abstracta called structures.

If you are OK with plentiful abstract objects, then you can address this worry by just saying that the numbers and sets are different items. Certain mathematics textbooks find it useful to speak as though 3 were literally identical to some set, but this is just a kind of "abuse of notation" motivated by the fact that we can see in advance that any facts about the numbers will carry over in a suitable way to facts about the relevant collection of sets named in honor of those numbers. One might argue that analogous abuse of notation happens all the time in math e.g. writing a function that applies to Fs where you really mean the corresponding function that applies to equivalence classes of the Fs. This route seems like a much less radical move than claiming that basic laws about identity fail to apply to positions in a structure e.g. there is no fact of the matter about whether positions in two distinct structures (like the numbers and the sets) are identical.

Tuesday, December 14, 2010

Cog Sci Question

[edited after helpful conversation with OE where I realize my original formulation of the worries was very unclear]

I was just looking at this cool MIT grad student's website, and thinking about the project of a) reducing other kinds of reasoning to Bayesian inference and b) modeling what the brain does when we reason in other ways in terms of such conditionalization.

This sounds pretty good, but now I want to know:

a) What is a good model for how the brain might do the conditionalization? Specifically: how could it store all the information about the priors? If you picture this conditionalization in terms of a space of possible worlds, with prior probability spread over it like jelly to various depths, it is very hard to imagine how *that* could be translated to something realizable in the brain. It seems like there wouldn't be enough space in the brain to store separate assignments of prior probabilities for each maximally specific description of a state of the world (even assuming that there is a maximum "fineness of grain" to theories which we can consider, so that the number of such descriptions would be finite).

b) How do people square basing everything on Bayesian conditionalization with psychological results showing that people are terrible at dealing with probabilities consciously?
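On worry a), one standard line of response is that the prior need not be stored world-by-world: if the prior has structure (here, the crudest case, full independence between features), it can be represented by a small number of parameters and the probability of any maximally specific world description can be recomputed on demand. The sketch below is my toy illustration of that point, not a claim about any particular cognitive model.

```python
# Toy contrast: an explicit prior over n binary features needs 2**n stored
# numbers, but if the features are treated as probabilistically independent,
# n parameters suffice and any world's prior is computed, never stored.

n = 20
feature_prob = [0.9] * n           # P(feature_i = True): n parameters total

def prior(world):
    """Prior probability of a fully specific world (a tuple of n booleans),
    computed on demand from the n feature parameters."""
    p = 1.0
    for on, q in zip(world, feature_prob):
        p *= q if on else 1 - q
    return p

explicit_table_size = 2 ** n       # entries an explicit table would need
factored_size = n                  # parameters the factored form needs

print(explicit_table_size, factored_size)   # 1048576 vs 20
print(prior(tuple([True] * n)))             # 0.9**n, computed on the fly
```

Richer models in the literature relax full independence while keeping the same moral: conditional independence structure is what makes storing a prior over an astronomically large world-space even conceivable.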

Google turns up some very general results that look relevant but if any of you know something about this topic and can recommend a particular model/explain how it deals with these issues...