Saturday, December 25, 2010

Three Arguments for A Priori Knowledge of (Very) Contingent Facts

Can we have a priori knowledge of contingent facts? For example, can we know truths like the following a priori?

NOT PEA SOUP: 'It is not the case that everything outside of a 5 foot radius around me is made of pea soup, which stealthily forms up into suitable objects as I walk by.'

Here are three positive arguments (in ascending order of strength, IMO) for the conclusion that we can know NOT PEA SOUP a priori.

1. Argument from Crude Reliabilism

The belief-forming method of assuming that you aren't in a pea soup world is reliable. And even if we make things a little less crude by saying that good belief formation is belief formation that works via a chain of methods which are *individuated in a psychologically natural way* and are reliable, we will probably still get the same conclusion. For plausibly the most natural relevant psychological mechanism involved in generating that belief would be something like, 'believe not P when P is sufficiently gerrymandered'.

2. Argument from Probability and Conditionalization-Based Models of Good Inference

Suppose you think that good reasoning is well modeled by the idea of assigning a certain probability measure to the space of possible worlds, ruling out worlds based on your observations, and asserting that P if and only if a sufficient fraction of the remaining probability is assigned to worlds in which P. Then there will be some propositions P whose prior probability is low enough to warrant asserting ~P before you have made any observations - and plausibly the pea soup hypothesis is one of them. Presumably in such cases your justification does not depend on experience. [I think Williamson has something like this in mind in one of his papers on skepticism, but his argument was more complicated.]
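The probability-and-thresholds picture above can be sketched in a few lines. This is a minimal toy model, not Williamson's actual argument; the world names, prior values, and threshold are all invented for illustration.

```python
# Toy sketch: a prior over (coarsely described) possible worlds, plus a
# threshold rule for assertion. All numbers here are made up.
priors = {
    "ordinary-world": 0.9999,
    "pea-soup-world": 0.0001,  # gerrymandered hypotheses get tiny priors
}

THRESHOLD = 0.99  # assert P when at least this much probability is on P-worlds

def assertable(p_worlds, probs, threshold=THRESHOLD):
    """True if the total probability of worlds where P holds meets the threshold."""
    return sum(probs[w] for w in p_worlds) >= threshold

# Before making any observation at all, NOT PEA SOUP already clears the bar:
print(assertable(["ordinary-world"], priors))  # True
```

The point of the sketch is just that the threshold can be met by the prior alone, before conditionalizing on any observation - which is the sense in which the resulting justification doesn't depend on experience.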

3. Argument from Current Knowledge plus Inability to Cite Experiential Justification. The claim that NOT PEA SOUP is a priori follows from a claim about knowledge that only a skeptic would deny, plus a somewhat intuitive claim about the relationship between apriority and justification. The intuitive claim I have in mind is that if someone can count as knowing that P without being able to point to any relevant experience (or memory of experience, or reason to believe that they had an experience, etc.) as justification, then they know that P a priori, so P is a priori (i.e. a priori knowable). Everyone but the skeptic agrees that people know that they aren't in the pea-soup world. These people who know cannot point to any experience as justification. Hence, NOT PEA SOUP must be knowable without appeal to experience for justification. You might try to defend the a posteriority of NOT PEA SOUP by saying that even if the man on the street can't make any argument from experience to NOT PEA SOUP, our intuition that people know NOT PEA SOUP is based on the assumption that there exists some good argument from facts about experience to NOT PEA SOUP, and philosophers just need to discover it. In this way, experience really would be necessary to justify the belief that NOT PEA SOUP, so the proposition would be a posteriori.

However, this response threatens to generate the unattractive conclusion that people today do not know NOT PEA SOUP. For, in general, the mere existence of a good argument for some proposition that I believe does not suffice to make me justified in believing that proposition now, if I cannot (now) give that argument. If I believe some mathematical theorem T on a hunch or on the basis of tea leaf reading, the mere fact that there is a good argument for T on the basis of things that I accept doesn't suffice to allow me to count as knowing that T. So even if there is some cunning philosophical argument yet to be discovered which justifies NOT PEA SOUP on the basis of experience, this argument cannot suffice to justify people in now accepting NOT PEA SOUP. If people now are justified in believing NOT PEA SOUP, and can give no argument from experience for this claim, it must be that the claim can be justifiably believed without appeal to experience.

Sunday, December 19, 2010

Reliabilism and the Value of Justification: The Angel's Offer

A major objection to reliabilism about justification is that it doesn't explain why we value having knowledge of a given proposition more than mere dogmatic true belief. For, believing a true proposition via a method that's reliable just makes you more likely to believe other true propositions; there's no obvious sense in which your relation to true beliefs formed by reliable methods is thereby intrinsically any better or more valuable than your relation to mere true beliefs. If we don't like a particular cup of good espresso any better for its being the product of a machine that reliably makes good espresso, why should we like a particular state of believing a truth any better for the fact that it was produced by processes that reliably lead to believing the truth?

But maybe we DON'T value having the special relation we do to justified true beliefs over and above its tendency to promote having stable true beliefs. Consider this thought experiment:

An angel convinces you that he knows the true laws of physics, and perhaps also that he can do supertasks and thereby knows certain statements of number theory which cannot be proved from axioms you currently accept. The angel offers to make it the case that these true principles feel obvious to you - the way 'I exist' or '2+2=4' feel to you now. He will wipe your memory of this conversation, so you will not be able to justify these feelings to yourself by appeal to the reliable way you got them - but of course you won't feel the need to justify them to yourself, since they will just feel obvious and you will be inclined to immediately accept them. [Suppose also, if it matters, that the angel will do the same to everyone in your community, that community members prefer to go along with whatever choice you make, that the angel is already going to blur your memories of not finding these claims obvious in the past, etc.]
Would you accept the offer?

I personally would definitely take the offer. And I think many people would share this preference. If there were something intrinsically valuable about knowing versus merely dogmatically assuming a necessary truth, then this would be a strong reason not to take the angel's offer. But if Plato is right (as thinking about the example tempts me to think he is) to say that the only bad thing about dogmatically assuming truths rather than knowing them is that dogmatic assumptions don't stay tied down, then the angel's offer to make you and everyone else in your community find these truths indubitable fixes that problem - and you should take him up on it.

Friday, December 17, 2010

Dilemma re: (Platonist) Structuralism

I've got to go reread my Shapiro. But before his smooth writing bewitches me, let me note down the very simple objection that I am currently unable to see how he would answer.

Structuralism is traditionally motivated by the desire to address a problem from Benacerraf: there are multiple equally good ways of interpreting talk of numbers as referring to sets, so that any answer to "what set is the number 3?" seems unprincipled. But now:

If you are not OK with plentiful abstract objects, you can't believe there are abstracta called structures.

If you are OK with plentiful abstract objects, then you can address this worry by just saying that the numbers and sets are different items. Certain mathematics textbooks find it useful to speak as though 3 were literally identical to some set, but this is just a kind of "abuse of notation" motivated by the fact that we can see in advance that any facts about the numbers will carry over in a suitable way to facts about the relevant collection of sets named in honor of those numbers. One might argue that analogous abuse of notation happens all the time in math e.g. writing a function that applies to Fs where you really mean the corresponding function that applies to equivalence classes of the Fs. This route seems like a much less radical move than claiming that basic laws about identity fail to apply to positions in a structure e.g. there is no fact of the matter about whether positions in two distinct structures (like the numbers and the sets) are identical.

Tuesday, December 14, 2010

Cog Sci Question

[edited after helpful conversation with OE where I realize my original formulation of the worries was very unclear]

I was just looking at this cool MIT grad student's website, and thinking about the project of a) reducing other kinds of reasoning to Bayesian inference and b) modeling what the brain does when we reason in other ways in terms of such conditionalization.

This sounds pretty good, but now I want to know:

a) What is a good model for how the brain might do the conditionalization? Specifically: how could it store all the information about the priors? If you picture this conditionalization in terms of a space of possible worlds, with prior probability spread over it like jelly to various depths, it is very hard to imagine how *that* could be translated to something realizable in the brain. It seems like there wouldn't be enough space in the brain to store separate assignments of prior probabilities for each maximally specific description of a state of the world (even assuming that there is a maximum "fineness of grain" to theories which we can consider, so that the number of such descriptions would be finite).
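The storage worry can be made concrete with a little arithmetic. This is a toy illustration only; the standard answer it gestures at (store a factored prior rather than a full joint table, roughly the idea behind Bayes nets) is one I'm sketching under an assumed independence structure, not a claim about what the brain actually does.

```python
# Toy illustration of the storage worry, and of the factored alternative.
n = 20  # binary "features" of a maximally specific world description

# Storing a separate prior for every maximally specific description
# requires one number per joint assignment of the n features:
full_joint_entries = 2 ** n  # 1,048,576 numbers for just 20 features

# A factored prior (here assuming full independence between features)
# needs only one marginal probability per feature:
factored_entries = n  # 20 numbers

marginals = [0.5] * n
# Conditioning on an observation is then cheap: observing feature 0
# just pins its marginal down, with no giant table to update.
marginals[0] = 1.0
```

The exponential gap between `full_joint_entries` and `factored_entries` is why "jelly spread over the space of worlds" is a bad literal picture of neural storage, and why computational models use factored representations instead.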

b) How do people square basing everything on Bayesian conditionalization with psychological results showing that people are terrible at dealing with probabilities consciously?

Google turns up some very general results that look relevant but if any of you know something about this topic and can recommend a particular model/explain how it deals with these issues...

Wednesday, November 24, 2010

Putnam Indeterminacy Dilemma

Putnam uses the Löwenheim-Skolem theorem (every consistent first-order theory has a model whose domain is the natural numbers or some subset thereof) to argue that the meanings of our sentences are indeterminate.

If considerations of elegance CAN make something a more natural candidate for the meaning of a given word (e.g. someone whose behavior doesn't distinguish between plus and quus means plus), then the mere existence of some (clumsy and arbitrary) Skolem model doesn't pose a problem for our meaning something definite - since the Skolem model's interpretation of expressions like "all possible subsets" will be much less elegant than the natural one.

If considerations of elegance CAN'T make something a more natural candidate for the meaning of a given word, then Putnam is wrong to assume that even the meanings of the first-order logical connectives which his perverse Skolem model captures are pinned down. For why think that we mean 'or' rather than a quus-like version of 'or' that starts behaving like 'and' in sentences more than a billion words long?

Sunday, November 21, 2010

Old Evidence and Apologies

If the problem of old evidence for Bayesian epistemology is just the following, then I don't think it's a problem:

Sometimes it seems like we should change our probabilities based on discovering logical consequences of a theory, but Bayesian updating only involves changing probabilities when you make a new observation.

For (it seems to me) this objection has the same ultimate structure as the following, surely bad, objection:

Sometimes it seems like we should apologize, but obeying so-and-so's moral theory involves never wronging anyone - and hence never apologizing.

If old evidence E is logically incompatible with hypothesis H, then Bayesianism says that you should *already* have ruled out all the worlds where H is true, and changed your probabilities accordingly, whenever you observed that E. So, I see no problem for the Bayesian epistemologist in saying that when you discover that you have failed to update in the way required by the theory (by not noticing a logical incompatibility), you should fix the mistake and change your probabilities accordingly.
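The "worlds already ruled out" point can be shown in a few lines of toy code. The worlds, probabilities, and the assumption that H entails not-E are all invented for illustration.

```python
# Toy sketch of conditionalization as ruling out worlds. Worlds are
# (H-status, E-status) pairs; suppose hypothesis H logically entails not-E,
# so no world has both H and E true.
priors = {
    ("H", "E"): 0.0,          # impossible: H entails not-E
    ("H", "not-E"): 0.3,
    ("not-H", "E"): 0.4,
    ("not-H", "not-E"): 0.3,
}

def update_on(observation, probs):
    """Rule out worlds incompatible with the observation and renormalize."""
    kept = {w: p for w, p in probs.items() if w[1] == observation}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

posterior = update_on("E", priors)
prob_H = sum(p for w, p in posterior.items() if w[0] == "H")
# Observing E already did all the work of refuting H; there is no separate
# "old evidence" step left to perform.
print(prob_H)  # 0.0
```

If you notice the incompatibility only later, the fix is just to perform this very update retroactively - which is the reply to the objection given above.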

[Compare this with the following popular intuition in ethics: you should promise to visit your grandmother and then visit her, but given that you aren't going to visit you shouldn't promise to visit her.]

Friday, October 15, 2010

Obvious vs. embarrassing mistakes

As you've probably noticed, this blog has been on a bit of a hiatus. I'm going on the job market this year, so things have been very busy. I do have a little time now, though, to note something about the relationship between two phenomena that are ubiquitous in my life :)

Not all obvious mistakes are embarrassing mistakes. Any mistake you make while adding two numbers will be an obvious mistake, but nearly everyone doing calculations makes such mistakes some fair fraction of the time, and these errors are not (intuitively) embarrassing mistakes.

further questions:
-Is being obvious once pointed out a necessary condition for being an embarrassing mistake? [edit: appropriately enough, I had originally put "sufficient" :)]
-Is the mere fact that a mistake is made with high frequency in some community sufficient to prevent it from being an embarrassing mistake? (Maybe affirming the consequent is made with high frequency yet is also embarrassing.)
-Will trying and failing to give principled necessary and sufficient conditions for a mistake being embarrassing make one feel less embarrassed by embarrassing mistakes?

[hat tip to E.M. for suggesting this would make a cute post]

Monday, August 16, 2010

On Moral Philosophers' Library Fines

I just listened to this neat conversation, which summarizes some empirical research into the question of whether thinking about moral philosophy makes you any better at behaving morally. It turns out moral philosophers are actually slightly more likely to steal library books than philosophers in other areas, and political philosophers are no more likely to vote than people in other professions.

The speakers mention that these results are surprising, since they conflict with the hope that researching moral philosophy will have morally good effects.

Now I'm pretty skeptical about moral philosophy myself, for other reasons, but here's what the moral philosophers would/could say for themselves on this score:

"The phenomenon of weakness of the will, means that there are two components to doing what's good: the epistemic component of figuring out what's morally better/required in a given case, and then the practical component of actually doing that.

Moral philosophy only purports to address the first component. Thinking hard about weird trolley cases and abstract moral principles helps you figure out what you ought to do in cases where this is unclear. It doesn't address the second component of acting well - working up the will power to actually do what you ought to.

In this way, moral philosophers are like scientists who study fistfights, not professional boxers. They spend a long time studying the differences between principles that only make a difference to what one should in principle do in certain rare cases. They don't spend this time practicing their personal ability to implement the overall art of fighting well.

For this reason, testing whether moral philosophers are more virtuous in cases where it's *obvious*/uncontroversial what's virtuous (you should return library books, you should vote) exactly fails to capture the benefits that doing moral philosophy brings. Studying moral philosophy helps society make the world better, because the moral philosophers work out what we should do in novel or controversial cases. This doesn't mean that it makes moral philosophers themselves substantially more virtuous. For, in most of the cases where ordinary people have a chance to act badly (adultery, embezzlement, falsifying data, refusing charitable aid), the limiting factor isn't *figuring out* what the right thing to do is, but rather summoning the willpower to sacrifice individual pleasure and benefit to do what's right."

Thursday, August 5, 2010

Maybe this was obvious to everyone else

If Fodor thinks that elements in the language of thought get their meaning from counterfactuals about asymmetric dependence (HORSE means horse, not horse-or-cow-on-a-dark-night, because if tokenings of HORSE hadn't tracked horses they wouldn't have tended to track horses-or-cows-on-a-dark-night either), what does he say about Swampman?

Since Swampman is supposed to have come into being from random electrical activity, none of these counterfactuals about different response patterns which Swampman could have had seem well defined. Does Fodor say that Swampman wouldn't be thinking?

I guess Davidson (who came up with the example) bites this bullet. But it seems like the exact kind of intuitions that motivate accepting mental representation in the first place (you could have just the same phenomenology, if you were paralyzed so you had no dispositions to use any external language; surely this should suffice for you to count as having thoughts) rebel at the idea that Swampman wouldn't be thinking.

Saturday, July 24, 2010

Invention, Discovery and Creativity in Mathematics

Non-philosophers I meet sometimes ask: do I think mathematical facts are invented or discovered? IMO, this is a weird question - and not one that comes up much in the phil math literature - because the contrast between invention and discovery is not very well defined. For example, did Alexander Graham Bell *invent* the telephone, or did he *discover* that putting components together in a certain way would build a telephone? Intuitively, one might say both.

Maybe what people mean to be asking by this question is just this: do mathematicians bring new mathematical objects into existence, or do they discover already existing objects? For paradigmatic cases of invention typically do involve creating a new physical object, while paradigmatic cases of discovery involve visiting an already existing physical object. So e.g. Columbus discovered America (because it already existed and he went to visit it), whereas Bell invented the telephone by physically creating the first prototype.

However, the contrast between invention and discovery can't really just track the distinction between cases where a new object is made vs. not. This is because making a new thing isn't required for invention *or* discovery. Consider an imaginary scenario where Bell just thought up a plan for a telephone and told someone else, who physically constructed the first one years later. Bell would still have invented the telephone if he thought up the plan and then worked out from known principles that the plan would work, but never made one.

While we are talking about invention and discovery, I think there's a third notion - artistic creation (e.g. what happens when someone composes a story or a poem) - which bears an interesting relationship to mathematical discovery. When a writer writes a story, they are putting down a sequence of sentences which already exists as an abstract object.

I mean, suppose that the storyteller composes a story today. If a linguist said yesterday 'no intelligible sequence of English sentences has property P', and the sequence of sentences which the storyteller writes down today has property P, then the linguist's claim yesterday was false. The domain of potential counterexamples to linguistic claims today already contains all sequences of English sentences which literary ingenuity could ever devise. Note also that composing a story or poem doesn't require writing it down anywhere (the person in the Borges story who has time stop so he can finish composing a poem before he gets shot still counts as creating the poem). For this reason the task of literary "creation" doesn't really seem to involve creating anything (neither a physical artifact, nor an abstract string of sentences), but rather directing your attention to an abstract object that already exists - carefully sorting out which string of sentences will combine certain varied and subtle properties in the right way.

Now, if I'm right about this - the creativity of a poet or novelist doesn't need to involve creating any new object, but rather amounts to discovering a pre-existing string of sentences which has a certain property - this suggests a potential confusion about the relationship between mathematical creativity and ontology. Arguably, mathematical creativity is much like literary creativity. But if mathematical creativity is like literary creativity, it does not follow that the mathematician creates the mathematical objects he describes, or that he creates anything else. For (if the above is right) literary creativity isn't a matter of bringing new objects into being, but rather a matter of discovering, amid the combinatorial explosion of possible sequences of English sentences, one that has certain special features.

Why Math and Morals Aren't Companions in Guilt

Intuitively, many people feel that epistemic worries about moral facts (if there are moral facts, how do we explain why our moral intuitions should be even remotely correct about them?) are WAY more serious than epistemic worries about mathematical facts (if there are mathematical facts, how do we explain why our mathematical intuitions should be even remotely correct about them?). But is there really a difference here?

Well, here's one thing that I think does make a difference: mathematical claims about number theory have direct and specific consequences for stuff that we can check by logic and/or scientific observation.

-what will happen whenever a person or a computer successfully applies a certain syntactic algorithm
-how many apples-or-oranges you have when you have n apples and m oranges (cf. Frege for why this is a logical fact)

This matters because, plausibly, the need to get these concrete applications right prevents our beliefs about number theory from getting too off the wall - whereas our moral intuitions have no such multitude of consequences which are directly checkable by logic and observation.

Saturday, July 17, 2010

Epistemology versus Foundations in Philosophy of Math

The epistemology of math task: get a true theory of the circumstances under which a person counts as knowing something. Or, at least, square our beliefs about which particular mathematical facts people know or fail to know with general beliefs about what's required for knowledge (e.g. causal contact).

The foundations of math task: extend our mathematical knowledge.

I claim that making this distinction matters a lot, because:

Arguments that are helpful for foundations of math are (in themselves) useless for the epistemology task. Suppose we have a working derivation D of certain facts of arithmetic from logic. And suppose we have a perfectly adequate, intuition-matching story about what it takes to count as knowing the relevant logical facts.
This still does not allow us to account for current knowledge of arithmetic (i.e. reconcile our theory of knowledge with the intuition that people now know things about arithmetic). This is because - in general - it is not enough for S to know that P that P be true, S believe that P, and P be derive*able* from things which S knows. In general, the subject S needs to have some kind of access to the derivation. The mere fact that I believe that P, and P can be proved from other things that I know, hardly suffices to establish that what I have counts as knowledge. If a lawyer is asked to show that some contractor knew that a bridge was safe, it doesn't suffice to show that one *could* derive, from laws of physics and facts about the blueprint which the contractor knew, that the bridge was safe - we also need to show that the contractor did derive it, or got testimony from someone who derived it, or the like.

Hence, a foundational argument which derives (say) one body of mathematics from premises that are more certain is not directly relevant to the general epistemological project.

Conversely, an accurate epistemology of mathematics can be almost perfectly useless for the task of setting some shaky region of mathematical theory on firmer foundations. For example, one classic account of knowledge is reliabilism. If we modify reliabilism so as to apply non-trivially to mathematics (following suggestions by Linnebo and Field), we get the idea that someone has knowledge if they have a true belief which is reliable in the sense that: they accept a sentence which expresses p, and if that sentence had not expressed a truth, they would not have accepted it. This is a perfectly decent candidate for a general account of mathematical knowledge. But note that, even supposing it is right, it does nothing to help satisfy foundational desires for, say, a more secure foundation for the axiom of choice. If someone has foundational worries about the axiom of choice, they have worries about whether it is true. They might express these worries by saying 'how do you know that the axiom of choice holds?', but the emphasis here is on truth, not on knowledge. It would be silly to respond by saying that we know AC because AC is true and we have reliable beliefs (as defined above) to that effect. What the foundation-seeker really wants is to know whether AC. They want to acquire knowledge about whether AC, not a general theory of what it would take to count as knowing AC.

So, I have been trying to argue that it's important to make a distinction between the epistemological project of trying to come up with a general theory of when someone knows something about math, and the foundational project of trying to make it the case that we know more things about math, by supplementing inadequate arguments with additional arguments that appeal to premises which are already known. The one focuses on the most bland and uncontroversial cases of mathematical knowledge, and tries to reconcile our other beliefs about the nature of knowledge with our particular judgments about these cases. The other seeks out the most controversial regions of mathematical claims, and seeks to secure knowledge for us about these claims by connecting them to claims that are more securely known. Enticing answers to one project can easily seem to frustratingly miss the point for someone who is interested in the other, as shown in the examples above. Hence it's important to make the distinction.

However, this is not to say that there's no relationship between the epistemological and foundational projects. Thinking about big-picture issues about justification in general can influence your judgments about particular cases. A somewhat trivial example of this concerns intuitions about what you can take for granted while still counting as being justified. Just off the top of one's head, it can seem attractive to say that someone doesn't count as knowing that P if all they can give is a circular justification for P, an infinite regress of justifications, or a justification that comes to a halt at a certain point. But when you consider these three options together and notice that they exhaust all the possibilities, you will likely be inclined to give up the principle that someone who can only give a justification of one of these kinds must thereby not count as having knowledge. So, if two realists about AC are attempting to provide and evaluate firmer foundations for AC, it may be helpful for them to consider general questions about what's required for knowledge and justification - to make sure that their evaluation of the evidence in this case doesn't depend on assumptions about justification which turn out to be incoherent, or which conflict with what they take to be sufficient evidence more generally.

Wednesday, July 7, 2010

Are mathematical truths "substantive"?

One thing that has caused me great puzzlement (in the past few years) is the question of whether math tells us anything 'substantive'. I want to suggest that our intuitive notion of "substantiveness" combines two distinct notions, which come apart in this case.

- mathematical truths DON'T rule out any physically or even metaphysically possible states of the world. (This is just another way of putting the truism that mathematical truths are necessary, hence compatible with every metaphysically possible world. I like putting things this way because it doesn't suggest that necessary mathematical truths arise from something (mathematical objects?) causally blocking any person who tries to be both more than three feet long and less than two feet long.)

- mathematical truths DO combine with our background beliefs to lead us to form expectations we wouldn't have formed otherwise (e.g. about the results of future counting procedures, or about the behavior of programs)

Presumably you admit that these are at least nominally different properties. But you might still wonder *how* these two things could come apart. How could knowing any proposition be useful, if this proposition didn't rule out any possible states of the world? Here's what I think the answer to that is in a nutshell:

Some mathematical facts (i.e. facts which are derivable from math and logic alone) are useful because they tell us that whenever one description of the world holds, so does another (e.g. anything that accelerates from standstill at this rate for this amount of time travels that distance; anything that's less than two feet long isn't three feet long).
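Here is a tiny worked instance of the nutshell point. The kinematic identity d = at²/2 rules out no possible world, but it licenses moving from one empirical description (acceleration and duration) to another (distance traveled). The particular numbers are made up for illustration.

```python
# One description of the world: a body accelerates from standstill...
a = 2.0  # acceleration, in m/s^2 (invented value)
t = 3.0  # duration, in s (invented value)

# ...and the mathematical identity d = a*t^2/2 hands us a second
# description of the very same situation, with no new worlds ruled out:
d = 0.5 * a * t ** 2  # distance traveled, in m
print(d)  # 9.0
```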

And here's the answer in more detail.

Monday, June 28, 2010

FOL as the language for science

Maybe I'm missing something here...

Quine suggests that we adopt first-order logic as the language for science. But first-order logic can't capture the notion of 'finitely many Fs'. It can only express the claim that there are n Fs for some particular n. Yet we do understand the notion of finite, and use it in reasoning (e.g. if there are finitely many people at Alice's party, there is one person such that no one is taller than him) and potentially in science. Hence, we should not adopt first-order logic as the language for science.
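The inexpressibility claim above can be made precise with a standard compactness argument; here is a sketch in the usual first-order notation:

```latex
% Claim: no first-order sentence \varphi is true in exactly the models
% with finitely many Fs. For each n, let \psi_n say "there are at least n Fs":
\psi_n \;:=\; \exists x_1 \dots \exists x_n
  \Big( \bigwedge_{1 \le i < j \le n} x_i \neq x_j
        \;\wedge\; \bigwedge_{1 \le i \le n} F(x_i) \Big)
% Every finite subset of \{\varphi\} \cup \{\psi_n : n \ge 1\} has a model
% (any model of \varphi with enough Fs), so by compactness the whole set
% has a model M. But M satisfies every \psi_n, hence has infinitely many Fs,
% while also satisfying \varphi. Contradiction.
```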

[The standard way to try to get around this is by talking about relations to abstract objects like the numbers (there are finitely many Fs if there's a 1-1 map from the set of things that are F to some set-theoretic surrogate for the numbers). This would give you the right extension, if your scientific hypothesis could say that something had the structure of the numbers. But first-order logic can only state axioms, like PA, which don't completely pin down the structure of the numbers. Any first-order axioms which you use to characterize the numbers will have non-standard models. This is Putnam's point in his celebrated model-theoretic argument against realism. So, if you take this strategy, rather than saying that there are finitely many people at Alice's party, you can only say that the people are equinumerous with some items that satisfy a certain collection of first-order axioms. And this does not rule out non-standard models.]

Is Math Logic?

Is mathematics just a branch of logic? This is the first question many people ask about philosophy of math (sometimes with a vague idea that a) it would solve some kind of metaphysical or epistemological problems if math were logic or b) it's been proved that math isn't logic). Well, unsurprisingly, the answer depends on what you mean by 'logic'. Here are some different senses of the word 'logic' that one might have in mind.

1. first order logic
2. fully general principles of good reasoning
3. a collection of fully general principles which a person could in principle learn all of, and apply
4. principles of good reasoning that aren't ontologically committal
5. principles of good reasoning that no sane person could doubt

The sense in which it has been proved that math isn't logic is (to put things as briefly as possible) this: You can't program a computer to spit out all and only the truths of number theory.

This fact directly tells us that the mathematical truths are not all logical truths, if we understand "logic" in sense #1 - since we *can* program a computer to list off all the truths of first order logic. And it also tells us that the mathematical truths aren't all logical truths in sense #3 or #5 either - if we are willing to make the plausible assumption that human reasoning can be well modeled in this respect by some computer program. For if all human reasoning can be captured by a program, then so can all human reasoning from some starting finite collection of humanly applicable principles, and so can the portion of human reasoning that no sane person could doubt (to the extent that this is well defined).
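The contrast can be made vivid with a toy formal system. The sketch below (my illustration, using Hofstadter's MIU puzzle rather than first order logic) enumerates theorems mechanically by breadth-first search over derivations; the same recipe - enumerate all finite derivations - is what makes the first order validities computer-listable:

```python
from collections import deque

def miu_theorems(limit):
    """Enumerate theorems of Hofstadter's toy MIU system breadth-first.
    Axiom: MI.  Rules: xI -> xIU, Mx -> Mxx, III -> U, UU -> (drop).
    Illustrates how a computer can list all theorems of a formal system."""
    seen = {"MI"}
    queue = deque(["MI"])
    theorems = []
    while queue and len(theorems) < limit:
        s = queue.popleft()
        theorems.append(s)
        successors = set()
        if s.endswith("I"):                       # rule 1: xI -> xIU
            successors.add(s + "U")
        if s.startswith("M"):                     # rule 2: Mx -> Mxx
            successors.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):               # rule 3: III -> U
            if s[i:i + 3] == "III":
                successors.add(s[:i] + "U" + s[i + 3:])
        for i in range(len(s) - 1):               # rule 4: drop a UU
            if s[i:i + 2] == "UU":
                successors.add(s[:i] + s[i + 2:])
        for t in sorted(successors):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return theorems

theorems = miu_theorems(50)
```

Gödel's point is then that no such enumerator can produce exactly the truths of number theory. (Famously, "MU" never turns up in this toy system either, though showing that takes an invariant argument rather than a longer search.)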

However, if by "logic" you just mean #2 - fully general principles of reasoning that would be generally valid (whether or not one could pack all of these principles into some finite human brain) - then we have no reason to think that math isn't logic. We expect the kinds of logical and inductive reasoning we use in number theory (e.g. mathematical induction) to work for other things (especially for things like time, which we take to have the same structure as the numbers). If Jim didn't have a bike on day 1, and on each subsequent day he could only have a bike if he had already had a bike on the previous day, then Jim never gets a bike. If there are finitely many people at Jane's party, there is one person such that no one is taller than them. The laws of addition are the same whether you are counting gingerbread men and lemon bars, or primes and composite numbers. And this doesn't just apply to principles of mathematical reasoning which we actually accept. We also expect any *unknown* truths about the numbers (as the smallest collection containing 0 and closed under a transitive, antisymmetric relation like successor) to be mirrored by corresponding truths about any other collection of objects which contain some other starter element and are as few as possible while being closed under a transitive, antisymmetric relation (be this a collection of infinitely many rocks, or a collection of some other abstracta like the range of possible strings containing only the letter "A"). Hence, it is plausible that every sentence about numbers is an instance of a generally valid sentence form containing only words like "smallest", "collection", "antisymmetric", "finite" etc - and every mathematical truth is a logical truth in this regard.
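The bike claim is just an instance of mathematical induction, and for any finite horizon you can check it mechanically. A minimal sketch (my illustration; the "same as yesterday" update is the most bike-friendly rule consistent with the story):

```python
def bike_history(days):
    """The bike story as a recurrence: Jim has no bike on day 1, and on
    each later day he can have a bike only if he already had one the day
    before.  Base case False plus a status-preserving step means the
    inductive conclusion: he never gets one."""
    history = [False]                  # base case: no bike on day 1
    for _ in range(days - 1):
        history.append(history[-1])    # inductive step: status carries over
    return history
```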

Finally, if by "logic" you mean #4 - principles of good reasoning that aren't ontologically *committal* - the answer depends on a deep question in meta-ontology. For, it is well known that standard mathematics can be reduced to set theory, which in turn can be reduced to second order logic. But what are the ontological commitments of second order logic?

People have very different intuitions about whether we should say that there really are objects (call them sets with ur-elements or classes) corresponding to "EX" statements in second order logic. Does the claim that "Some of the people Jane invited to her party admire only each other, so if all and only these people accept, she will have a very smug party" assert the existence of objects called collections? More generally: is the quantification over classes in second-order logic ontologically committal? Statements like the one above certainly seem to be meaningful. And, it turns out not to be possible to paraphrase away the mention of something like a set or class, in the sentence above, using only the tools of standard first order logic. This reveals a sense in which, in our logical reasoning, we treat abstracta like classes (or, equivalently for these purposes, sets with ur-elements) very similarly to ordinary objects. But is this enough to show that second order logic is ontologically committal (and hence not logic at all, according to meaning #4)?
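The smug-party sentence is a variant of the Geach-Kaplan sentence ("some critics admire only one another"), whose usual second-order rendering (with my glosses: $Ix$ for "Jane invited $x$", $Axy$ for "$x$ admires $y$") is:

```latex
\exists X \Bigl[ \exists x\, Xx \;\wedge\; \forall x (Xx \to Ix)
  \;\wedge\; \forall x \forall y \bigl( (Xx \wedge Axy)
        \to (x \neq y \wedge Xy) \bigr) \Bigr]
```

The claim in the text - that the class quantifier $\exists X$ cannot be paraphrased away using only $I$, $A$ and first order quantifiers - is the standard nonfirstorderizability result associated with this example.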

I propose that the key issue here concerns how closely ontology is tied to inferential role. Both advocates and deniers of abstract objects will agree that many of the same syntactic patterns of inference are good for sentences containing "donkey" and for sentences containing "set". But what exactly does this tell us about ontology? If you think ontological questions just are questions about the logical role of an expression in a given language, this tells you something very decisive. On the other hand, if you think ontology can swing somewhat free of the inferential roles of sentences in languages (so an expression can have an object-like inferential role without naming an object), it's open to you in principle to say that - however similar their logical role - second order quantifiers are not ontologically committal. On this view, claims about sets with ur-elements are just ways to make very sophisticated claims (generally claims that could not otherwise be finitely expressed) "about" the behavior of, and relationships between, ur-elements, and true claims about pure sets (i.e. sets that can be built up just from the empty set) are true in a way that does not involve any particular relationship to any objects, but can illuminate the necessary relationships between different expressions about classes that do have ur-elements. [At the moment I prefer the former view, that quantification in second order logic is ontologically committal, but this is a subtle issue]

Thus, to summarize, it is fully possible to say - even after Godel - that math is the study of "logic" in the sense of generally valid patterns of reasoning. However, if you say this, you must then admit that "logic" is not finitely axiomatizable, and that there are logical truths which are not provable from the obvious via obvious steps (indeed, plausibly ones which we can never know about). Note that to make this claim one need not give up on the idea that logical arguments proceed from the obvious via obvious steps. For, if you take this route you can (and probably will want to) distinguish the human practice of giving logical arguments from the collection of logical truths. You can say: only some of the logical truths seem obvious to us, and only some of the logically-truth-preserving inferences seem obviously compelling to us. We make logical arguments by putting these inferences together to get new results which are also logical truths. But what Incompleteness shows is that not all logical truths can be gotten from the ones that we know about. You can even claim that mathematical truths are logical in the further sense of not being ontologically committal, if you allow (contrary to the usual close association between objecthood and logical role) that the set quantifiers in second order logic are not ontologically committal.

Friday, June 18, 2010

Knowledge and Canonical Mechanisms

In my first epistemology class in college, the prof encouraged us to look for adequate necessary and sufficient conditions for knowledge by making the following (imo appealing) argument. We expect that there's SOME nice relationship between facts about knowledge and descriptive facts not containing the word knowledge, since our brains seem to be able to go, somehow, from descriptions of a scenario (like the Gettier cases) to claims about whether the person in that scenario has knowledge. However, philosophical attempts to find a nice definition of knowledge in other terms seem to have systematically failed. This suggests that there may be a correct and informative definition of knowledge to be found, one which is too long to make an elegant philosophical hypothesis, but not too long to correspond to what the brain actually does when judging these claims.

So here's what I propose that the true definition of knowledge might look like:

We describe messy physical processes by talking about simple mechanisms, and a notion of what these mechanisms tend to do "ceteris paribus". People agree surprisingly much on which mechanisms approximate what (e.g. how to go from facts about swans to claims about the swan lifestyle, how to divide up actual dispositions to behavior into "behaving normally" vs. "something special happening whereby the ceteris aren't paribus"). One thing that can be so approximated is human belief formation. We think about actual human belief formation by saying that "ceteris paribus" it approximates a combination of various belief forming mechanisms (e.g. logical deduction, looking etc). A reliable belief forming mechanism is one whose ceteris paribus behavior yields true beliefs.

Certain belief forming mechanisms are popular, and remain popular with people even when they undergo lots of reflection. Some of these are canonical, in the sense that we count them as potential conduits for knowledge. But, if we ever come to believe that some such mechanism is not reliable (in the sense defined above) we will stop saying that beliefs formed via it count as knowledge. So here's what I think a correct definition of knowledge might look like.

We have, say, 300 canonical reliable mechanisms for producing knowledge, 200 canonical reliable mechanisms for raising doubt (100 optional and 100 obligatory), and 200 canonical reliable mechanisms for assuaging doubt. Call these CRMs. Our definition starts by giving a finite list of all these CRMs.

You know P if and only if your belief in P was generated by some combination of CRMs for producing knowledge, and you went through doubt-assuaging CRMs corresponding to a) all optional doubt-raising CRMs that you did engage in and b) all obligatory doubt-raising CRMs that apply to your situation.
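Here is a toy rendering of that clause structure (my illustration; all mechanism names are invented, and the real lists would have hundreds of entries):

```python
def counts_as_knowledge(generating, doubts_raised, obligatory_doubts,
                        assuaged, producing_crms, assuaging_for):
    """Toy model of the proposed definition.
    generating: mechanisms that actually produced the belief
    doubts_raised: optional doubt-raising CRMs the subject engaged in
    obligatory_doubts: obligatory doubt-raising CRMs that apply here
    assuaged: doubt-assuaging CRMs the subject went through
    producing_crms: the canonical knowledge-producing CRMs
    assuaging_for: maps a doubt-raising CRM to the assuaging CRMs
                   that answer it"""
    if not generating or not generating <= producing_crms:
        return False                 # belief not generated by canonical CRMs
    for doubt in doubts_raised | obligatory_doubts:
        if not (assuaging_for.get(doubt, set()) & assuaged):
            return False             # an active doubt was never assuaged
    return True

# Invented example CRMs:
PRODUCING = {"perception", "deduction", "testimony"}
ASSUAGING_FOR = {"conflicting_testimony": {"seek_corroboration"}}
```

So, e.g., a belief from perception with no doubts raised counts; a belief from testimony with a conflicting-testimony doubt raised counts only once corroboration has been sought.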

Even though this is just a claim about what the form of a correct definition of knowledge would look like, it already has some reasonably testable consequences:
1. That situations where it seems unclear or vague what mechanism best describes a person's behavior (should I think of the student as correctly applying this specific valid inference rule, or fallaciously applying a more general but invalid inference rule?) will also make us feel that it's unclear or vague whether the person in question has knowledge.
2. That it should seem unclear whether to attribute knowledge when reliable but science-fictiony, hence non-canonized, mechanisms are described. For example, most people would say it's OK to take deliverances of the normal 5 senses at face value, without checking them against something else. But what about creatures with a 6th sense that allowed them to reliably read minds, or form true beliefs about arbitrary Pi-0-1 statements of arithmetic (imagine creatures living in a world with the weird physics that allows supertasks, and suppose that they have some gland that has no effect on conscious experience, but whose deliverances reliably check each case)? Would they count as knowing if they form beliefs by using these?

Wednesday, May 19, 2010

New Uses for Conceptual Analysis

Coming up with a systematic way to paraphrase sentences involving some wacky new term W in biology, sociology, psychology, or art criticism into sentences that are just a logical product of claims about sets and mereological sums of other more commonplace objects, in a way that preserves all our intuitive reasoning about W, is useful in three ways.

a) New Applications for Old Knowledge: Getting a method of paraphrase lets us bring our logical/set theoretic/substantive knowledge about the terms used in the analysis to bear on the new term in question. If the facts about the Ws parallel the facts about sets of such and such kind, set theory may have interesting implications for facts about the Ws.

b) Avoiding Adding Terms Which are "Incoherent" or "Have False Presuppositions": Getting a method of paraphrase may let us prove the consistency and conservativity of reasoning about the wacky new entities. For example, if you analyze 'x is a bachelor' as 'x is unmarried & x is a man', and then only accept informal reasoning about bachelors that can be reconstructed using this analysis, then it is clear that adding the term 'bachelor' and doing this informal reasoning will not allow you to derive a contradiction, or any other new consequences. So, adding informal reasoning about bachelors will do no harm. Here the proof theory (any proof of P which uses the term "bachelor" could be turned into one that doesn't) is so obvious that it's easy not to notice. But the mathematical issues involved in showing consistency and/or conservativity of adopting some term (together with the analytic-feeling reasoning that goes with that term) can become more interesting when conceptual analysis only provides an *implicit* or recursive definition of the term.
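The bachelor case is simple enough to mechanize. A sketch of the elimination step (my illustration, treating formulas as strings; applying it line by line to a derivation is what turns a proof using 'bachelor' into one that doesn't):

```python
import re

def eliminate_bachelor(formula):
    """Rewrite every occurrence of bachelor(t) as (unmarried(t) & man(t)),
    per the analysis 'x is a bachelor' := 'x is unmarried & x is a man'.
    This is the (trivial) proof-translation behind the conservativity
    claim: nothing provable with the new term outruns the old ones."""
    return re.sub(r"bachelor\((\w+)\)", r"(unmarried(\1) & man(\1))", formula)
```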

This is valuable to the extent that you are worried a purported new concept may be 'incoherent' (in the sense that intuitive, analytic-feeling reasoning about it literally lets you prove a contradiction) or may have bad 'presuppositions' (in the sense that intuitive, analytic-feeling reasoning using the term allows one to derive new propositions, not using that term, which are false).

c) Teaching: Obviously getting a method for paraphrasing sentences involving new terminology in terms of old terminology provides a way of teaching the new terminology to people who already understand the old terminology.

Note that none of these purposes require that conceptual analyses be unique. Different analyses of claims about, say, the imaginary numbers, in terms of set theory can each serve this purpose equally well. Nor do these uses for conceptual analysis require that one make any claim about the metaphysical status of the objects in question. It's useful to know you can reconstruct all intuitively acceptable reasoning about the imaginary numbers in terms of intuitively acceptable reasoning about sets, even if you don't want to claim that the imaginary numbers ARE sets or anything like that. Nor, lastly, do they require that the analyses have some kind of psychological reality - that when you are thinking about imaginary numbers you are really somehow implicitly (subconsciously?) considering one or the other paraphrase in terms of sets.

[Hidden agenda: Even if it turns out that Occam's razor doesn't apply to positing special sciences objects like livers, species, trade deficits and languages, so there's no need to look for paraphrases which would allow us to *deny* that such "extra" objects exist, finding Quinean-style paraphrases will still be illuminating and useful for other reasons. So we philosophers won't be talking ourselves out of a job :). Also, to the extent that you feel like something substantial is going on when one looks for Quinean methods of paraphrase, this may be because these paraphrases illuminate the structure of our intuitive reasoning about Ws, and let us relate the W facts to facts about objects we understand better - not because there is a serious question about whether the Ws really exist.]

Tuesday, May 18, 2010

A Depressing Theory of Ceteris Paribus Clauses

We want to say "sugarcubes dissolve in water, ceteris paribus", but what does that mean? Philosophical analysis of the phrase ceteris paribus has proved surprisingly difficult. For example, the quoted sentence doesn't mean that all or most pieces of sugar that actually will be dropped into water will dissolve.

Here's a depressing proposal for how ceteris paribus clauses work. We have a substantive (implicit) theory of what "the normal cases" are like, which is based on human daily life and maybe some random traditions too. We use this theory, when evaluating ceteris paribus sentences, to choose which ways of making the target sentence true to consider. So, for example, 'ceteris paribus' clauses get filled in so that "dropped eggs break, ceteris paribus" is true, because people tend to hang out in places near the surface of the earth which don't have thick rugs, so it's part of our substantive theory of what's "normal" that when something is dropped there's a hard surface below it (as opposed to a thick rug, or the empty expanse of space).

Sunday, May 9, 2010

Miniature Phil Math

Almost everyone agrees that our mathematical talk is practically helpful. Unlike astrology, doing math helps us build bridges. But how is math practically helpful? And does the way in which talking about numbers is practically helpful give us any reason to think numbers actually exist?

In this tiny essay I will propose a theory of how the practice of talking as if there were numbers is helpful. Then, I will say that we can appeal to numbers to explain how this practice is helpful, though there are also other correct explanations for this phenomenon which do not commit themselves to numbers. I will conclude by turning to the question of whether there are numbers. On the basis of the previous section I will propose that we do not *need* to posit the existence of numbers to explain the practical usefulness of our mathematical talk. However, we have another reason to believe in numbers, which is the following: We want to make statements like "the number of cupcakes doubles every day" true (under certain circumstances), and the pattern of inferences we make with this sentence is quantificational. But this (being describable by some true sentences associated with an existential pattern of inferences) is the only thing that the many different kinds of non-mathematical objects which intuitively exist have in common.

1. How talking about abstracta like numbers is helpful

Talking about abstract objects, like numbers, is helpful because it lets us economically hypothesize patterns 'in the world around us' as well as patterns that might be described as artifacts of language (patterns in which distinct descriptions are logically or otherwise necessarily equivalent). We can say one sentence (about numbers) that will cause people to be willing to infer infinitely many different sentences that aren't about numbers.

For example, suppose I say: "The number of cupcakes doubles every day". This is a claim that quantifies over numbers and days, in the sense that we might represent it as "Ad An if d is a day, and n is a number, then if there are n cupcakes on d, there are 2n cupcakes on the day after d."
Hearing this single sentence will lead my listeners to accept many different statements that do not quantify over numbers (writing "Exn" for "there are exactly n"):
"if Ex7 cupcakes today, then Ex14 cupcakes tomorrow."
"if Ex8 cupcakes today, then Ex16 cupcakes tomorrow."
"if Ex7 cupcakes tomorrow, then Ex14 cupcakes the day after tomorrow."
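The one-to-many unfolding can itself be sketched mechanically (my illustration; the instances are written out in English, and of course only finitely many of the infinitely many instances are generated):

```python
def doubling_instances(max_n, day_names):
    """Unfold the single general claim 'the number of cupcakes doubles
    every day' into some of its infinitely many number-free instances."""
    return [
        f"if exactly {n} cupcakes {today}, then exactly {2 * n} cupcakes {tomorrow}"
        for today, tomorrow in zip(day_names, day_names[1:])
        for n in range(1, max_n + 1)
    ]

instances = doubling_instances(8, ["today", "tomorrow", "the day after tomorrow"])
```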

2. What role do abstract objects play in explaining why talk of abstract objects is helpful?
Now we can ask: what role do various objects play in explaining the success of this talk? We might explain the helpfulness of my statement by saying that it is helpful because it...
- lets us track and predict what cupcakes there are and will be
- lets us track *the pattern in* what cupcakes there are and will be
- lets us track and predict how *the doubling function* relates *numbers*, and then predict what cupcakes there will be when, by relating this to facts about the behavior of the doubling function.

It seems to me that all of these are intuitively decent explanations. I take it that what we have here is a typical case where the same phenomenon (a war) can be explained by accounts that quantify over various different objects (countries vs. people vs. atoms). However, not much would be lost if we just stuck to giving the first explanation, which does not involve any mention of abstract objects.

3. Are there numbers? A good and bad reason for believing in numbers.

If this story about how math is practically helpful is right, should we believe that there really are objects of the kind talked about in these explanations, e.g. patterns in the provenance of cupcakes, or numbers and a doubling function?

I don't think there is an *inference to the best explanation* for the existence of patterns in the provenance of cupcakes, or numbers from the helpfulness of this talk. It's not the case that we *need* to posit abstract objects called "patterns in the provenance of cupcakes" or "numbers" to explain how saying the thing described above could help people cope with the cupcakes around them.

Instead, I think it's reasonable to believe in numbers because we have an intuitively true sentence ("the number of cupcakes doubles every day") which allows an existential pattern of inferences - and playing this logical role is all there is to being an object.

The idea here is that when we look at the variety of different "objects" in the world e.g. electrons, magnetic fields, goats, holes, waves, contracts, countries, these different kinds of talk don't seem to have much in common with regard to their relation to the physical world. What they do have in common is the pattern of inferences we make with sentences about them. In each case we accept sentences such that the inferences involving these sentences are elegantly captured (in first order logic) by something of the form "Ex Fx". Now it turns out that talking about numbers and the doubling function shares this same feature.

Wednesday, April 14, 2010

Contrast w/ Tait "The Platonism of Mathematics"

Both my view (Lumpist Platonism) and Tait's might be considered unusual or quirky versions of platonism. Platonism (in phil math) is the view that mathematical objects exist.

I think that the world is fundamentally (something like) a space-time manifold [as opposed to a set of facts, or a set of objects and relations], and that all statements are true or false in virtue of how the manifold is. This includes statements about objects, and different statements about objects will correspond to very different claims about the state of the manifold (e.g. saying that there's a table vs that there's a whirlpool vs a trade deficit vs. a marriage contract vs. a number or string of symbols or a proposition). So facts obtain, and objects and relations exist, in virtue of how the physical stuff of the world is configured, not vice versa. Necessary truths (like all statements of pure math) correspond to the trivial claim about the state of the manifold (one that doesn't rule out any possible configurations).

Tait, as I understand him, thinks that mathematical sentences show that objects exist by constructing suitable objects. He writes "A proof is a presentation or construction of an object: A is true when there is an object of type A and we prove A by constructing such an object."

Both of these views contrast with what you might call a "two worlds" version of platonism. On this view: in addition to whatever objects exist in virtue of the physical stuff of the world comporting itself a certain way, there is also an "extra" component of reality. So far as I understand the force of the word "extra" here, the point of saying that there's an extra component of reality is this: An infinite and putatively exhaustive description of the world given purely in the language of microphysics e.g. (this point has that property, this point has that property etc.) would be missing out on the existence of sets, *in some stronger sense than the sense in which it would be missing out on rabbits and trade deficits*.

Tait and I also agree that sentences are the right place to start when considering how semantics relates to metaphysics and ontology. For a sentence to be meaningful you just need the whole sentence to somehow make a claim about the world. Thinking about particular words in the sentence as having favored relations with particular chunks of matter will help in some cases but not others.

However, I disagree with Tait on some really important points:

Firstly, I don't really understand what he means by construction. The best sense I can make of the idea of constructing mathematical objects (how can you bring an abstract object into being?) is that it's something like the way I can create a) a marriage contract with another person by signing things at the courthouse, or b) the set with ur-elements {Sharon's mullet} by giving myself an ill-judged haircut and thereby bringing a particular mullet-token into being, and hence its corresponding singleton. But if this is what he had in mind, then...
a) it is (at the very least) wildly counterintuitive to say that there wasn't a number between 3 and 5 before someone wrote down a proof inscription.
b) quantification in math would work very weirdly and differently from quantification in general. For, since people have only written finitely many proofs there will be some number - say 3457892 - such that no one has inscribed a proof of "3457892 has a successor". On the other hand, we certainly have inscribed proofs of "Ax if x is a natural number then x has a successor". So it would seem that the general statement is true. But the instance is (at the moment) false.

Secondly, Tait doesn't seem to allow that quantified statements of arithmetic (like, say, the Godel sentences for various formal systems) already have truth values now. He seems to think we are free to choose which kinds of proofs to construct (i.e. what formal system to adopt). And then he says that "the incompleteness of formal systems such as elementary number theory can be proved by induction, is best seen as an incompleteness with respect to what can be expressed in the system rather than with the rules of inference." And he points out that by extending the language (and adding suitable instances of the induction schema) you can prove the Godel sentence (and con) for this system.

But when I wonder about e.g. con(PA+X) [it's pretty hard to wonder about con(PA) imho] or con(ZFC), I'm not just wondering whether I could extend my formal system in such a way as to allow these sentences (or their negations) to be derived. Obviously, I could start making derivations (and hence constructing objects, for Tait) in any formal system I want. Nor am I pondering what kind of lifestyle choice to adopt in the future. Rather, I think that *right now*, I understand what it means to ask whether there's a proof of 0=1 from ZFC. And this is what I want to know. Is this sentence provable in that formal system or not? Is there such a proof or not? To the extent that we can ever be sure that we really understand something, and are asking a sharply meaningful question, this is it! [I think this may be why my advisor PK disagrees with Tait too]

Overall, I'm tempted to suspect that Tait is getting into bed with unattractive antirealism because he wants to avoid an epistemological problem. He sees how (/doesn't worry about how) you could know that something exists if you are able to bring it into existence (construct it). Such knowledge is sometimes called "maker's knowledge". And then he wants to say what mathematical knowledge is, in such a way that all mathematical knowledge turns out to be accessible in this way - which leads to weird consequences about large numbers, and unknown arithmetical facts.

In contrast, if you use the ...ahem... magic of Sharon's thesis, to provide a general naturalistic mechanism for how physical creatures could have gotten a faculty of reliable rational insight into abstract mathematical/logical truths :) , then you don't have to do any of this fancy (and potentially distorting) footwork.

Parsons and Intuitability

I've just been summarizing CH1 of Charles Parsons' Mathematical Thought and its Objects. It set me thinking that Parsons is oddly concerned with whether you can "see"/perceive/intuit mathematical objects. I say oddly, because IMO what matters for assuaging worries about the weirdness of mathematical objects or the weirdness of our knowing about them (which seems to be part of his aim) isn't whether we can strictly speaking *see*/perceive/intuit abstracta, but rather a) whether positing abstracta isn't a violation of Occam's razor and b) how there can be enough of a connection between mathematical facts and our dispositions to form beliefs about them, for what we have to count as knowledge.

I mean: even in the empirical case, questions about what we can see, as opposed to merely inferring from what we see, are super murky. Who knows whether you can "see" that the light is on vs. that the electricity is back on vs. that Jones succeeded at his task etc. (as opposed to inferring these things, or justifiably and reliably forming true beliefs about them)? What matters (for the epistemology worry b) is just that there needs to be some suitable and clear reliable mechanism at work leading you to form true beliefs on these subjects - as there obviously is in the empirical case of the light. Once we see how this reliable mechanism could work, it's (in my opinion) a matter of indifference whether you want to describe this mechanism as seeing the light and then immediately and unconsciously but justifiably inferring that the electricity is back on vs. directly seeing that the electricity is back on.

And the same goes for knowledge of mathematical objects. What we'd like is something that was like perception in the sense that it provided an unproblematic mechanism whereby we could get knowledge of the relevant kind. Once we have that in place, we can say whatever we like about whether someone staring at a piece of paper can see/perceive/intuit that there's a proof of SS0+S0=SSS0 in PA, or a palindrome containing the word 'adam', vs. merely reliably and justifiably infer these statements from the concrete object that they do see. The million dollar question is how we manage to do this putative seeing/inferring correctly.

Similarly, if someone thinks that construing math as stating truths about genuine abstract objects is a violation of Occam's razor (as per objection a), they aren't going to be impressed by claims to "see" the abstract object (a string) in the concrete object (a series of inkmarks). When the Platonist stares at the sheet of paper and says they are seeing that there's a proof of SS0+S0=SSS0, the Fictionalist will say that they are seeing that there would have to be a proof in the relevant mathematical fiction, and the modalist will say they are seeing that a certain proof is possible.

My point here is not to knock Parsons' interest in the relationship between concrete things you can see and abstract mathematical objects. Hearing him talk about this connection was a decisive inspiration for my own view, and I think it's absolutely crucial to think about the concrete physical processes going on when we form and revise mathematical beliefs, if you want to understand how creatures like us could know about (or even think about) something as abstract as math. But I would claim that the key point about string inscriptions isn't what they represent/allow us to intuit (can you stare through the string inscription to the string itself? can you at least see that a certain string exists?), but (as it were) what you take these inscriptions to represent, i.e. how you are willing to form and revise your beliefs about other things, like strings as abstract objects, in response to seeing them. This is what starts to give us traction in linking up our dispositions to form mathematical beliefs to mathematical facts, to answer challenge (b). (IMO answering challenge (a) requires something else entirely, namely Lumpism, but more about that in the next post)

Parsons Mathematical Thought and its Objects CH1 summary

No one I've talked to is really sure what's going on. Especially me. But here's my current best guess. Maybe the magic powers of saying something wrong on the internet will help us work our way incrementally to a better interpretation.

1. Abstract objects defined + generic worries about them

Mathematical objects would be abstract objects = acausal, not located in space and time.
Worry: They aren't perceptible, if perceiving something requires locating it. Maybe this suggests there are no such things?
- electrons don't seem to be directly perceptible either, but they exist
- if we say that mathematical objects don't exist then we will have to explain why talking as if they did is so helpful for science.
- it's not clear whether we can avoid quantifying over abstract objects, hence (if we accept Quine's criterion) saying that they do exist.

2-3 What is an object?

It's hard to answer the question 'what is an object?' since unlike with gorillas we can't point out a contrast class of things that aren't objects.

the right answer: logical role
Philosophers usually ask 'what's an object?' in the context of trying to figure out how language can relate to the world - how we can talk about objects. For these purposes we can define being an object in terms of logical role: objects are what we talk about by using singular terms (e.g. 'Bob' in Bob is happy = Happy(Bob)) and quantification (e.g. 'Ex: x is happy').

other conceptions of objects/requirements philosophers have had for objects...

i. actuality/causal powers
Digression about Kant: general notion of object vs. "Wirklichkeit"
Kant invented the phrase 'concept of an object in general'. Kant's "categories" are concepts of an object in general. He is conflicted about whether these categories have to be perceivable by the senses [and hence whether "the concept of an object in general" would allow abstract objects?]
a) the categories are supposed to be derivable from logic and general considerations that don't take into account anything specific about the kind of object involved.
b) applying the categories is only supposed to generate knowledge when combined with stuff from the senses (namely: "the manifold given in sensory intuition")
Kant and Frege seem to have a notion of the actual = "wirklich" which only applies to objects you can causally interact with
Kant clearly accepts mathematical objects in some sense, but it's not clear whether he somehow thinks they are merely possible.

Idea: Many people find abstract objects spooky because they assume that they would have to be Wirklich, or something like it. The merely logical conception of object above doesn't require any such thing. So maybe mathematical objects exist in the logical sense i.e. we can state truths using singular terms for them and using quantifiers, but they are somehow not Wirklich.

ii. intuitability

Kant digression:
You use intuition to discover whether anything could fall under a given concept. [presumably 'round square' would be an example of a putative concept that doesn't pass this test.]
geometric figures = forms of empirical objects
We can learn about them using intuition.

Perhaps it's a requirement that all objects be 'intuitable'?

defining intuitable
We will use intuition to mean a kind of perception that could apply to physical objects or abstract objects. We can distinguish
- having an intuition of an object, like perceiving an object (e.g. 'I intuit the equilateral triangle')
- having an intuition that some proposition about the object holds (e.g. 'I intuit that the interior angles of the equilateral triangle add up to 180')

Some issues:
-Should we require that one can have intuition *of* the object, rather than merely intuiting some suitable proposition about it? (call this strong intuitability) Or is it enough if you have an intuition of concrete objects that represent abstract objects, like the sequence of strokes Kant appeals to in his proof that 7+5=12? (call such a representation a quasi-concrete representation)
-In what sense does it need to be possible to intuit something for that something to count as intuit*able*, and hence satisfy the requirement?

Idea cont. - Maybe mathematical objects are real in the logical sense, and intuitable, but not wirklich/causally efficacious...

4. objecthood=having the logical role of an object

We will stick with Quine and Frege and say that the logical criterion (not wirklichkeit or intuitability) is all that's required for objecthood.

Some questions arise if you accept this definition of "object", about how to further spell out the view.

a) Which logic has the property that *its* singular terms and quantifiers correspond to objecthood? Maybe we should allow modal or other intensional notions, and if we do we will get different answers about what objects there are.
b) Maybe there are some entities which aren't objects? (i.e. maybe there's some important ontological category that's wider than objecthood - like some kind of Meinongian being)
c) Maybe there are some objects which don't exist? (i.e. maybe there's some important ontological category that's narrower than objecthood - fictional objects, say, might be said to be objects in the logical sense, but not to really exist)

5-6 are about b and c respectively

7. Quasi-concrete objects

We will call abstract objects quasi concrete if they have a special relationship to certain concrete objects that 'represent' them e.g.
strings of letters --- inscriptions of strings of letters
sense qualities --- experiences of those sense qualities
shapes --- physical things that have that shape

We can look at the physical representatives, and keep in mind individuation criteria for the abstract objects. These individuation criteria say when two different concrete things 'represent' the same abstract one.

Some sets are quasi-concrete: sets with concrete ur-elements are represented by those ur-elements. But pure sets are not quasi concrete.

Overall Conclusion: mathematical objects exist in the logical sense, although they are not Wirklich, and although some of them are not intuitable even in the weak sense allowed by looking at concrete objects that represent them.

Friday, April 9, 2010

Field on Normativity and Logic

In "What is the Normative Role of Logic?" Field argues that you can't understand logic descriptively (as, e.g., the project of studying necessarily truth-preserving syntactic manipulations), and so are forced to a more normative conception of logic (logic is the study of how one ought to reason), by the following dilemma.
-classical logics can't state a general truth predicate (if they could, we could inductively argue for the soundness of logic, and hence give a consistency proof for logic L in logic L, contra Gödel's second incompleteness theorem)
-non-classical logics which can state a general truth predicate sometimes fail to preserve truth in some degenerate cases (in places where good reasoning wouldn't lead you in the first place).

So (Field says) the only people who can *state* the descriptive criterion for being a logic, deny that logic has to have that property.

But I think there's a gap in this argument: why should you have to be able to state your criterion for what a good logical system is *in the formal language of that logic*? In particular, why can't the anti-normativist about logic reply like this:

A. Classical Logic Version:

Logic is the study of formal systems of syntactic manipulation which are truth preserving for various fragments of our language (e.g. English sans any truth predicate, English sans any repeated application of the truth predicate). Practically speaking, this is all we need for almost every purpose except philosophy of logic and truth. And the moral of the Tarski-Gödel considerations above is that this is all we can get.

Formal, exceptionless rules for truth-preserving reasoning are great when you can get them (i.e. for limited fragments of our language), but what Field has shown is that we can't get any such rules that apply to the informal notion of truth (as opposed to the notion of truth-of-a-sentence-in-L, for various restricted L).

Admittedly, taking this route involves giving up the traditional and somewhat attractive Fregean idea that logical principles are fully general, and hence would apply to all possible reasoning, but - at least- this seems way less revisionary than the normative relativism about logic where Field winds up.

B. Non-Classical Logic Version:

It was indeed wrong to say that logic studies patterns of inference that are always truth preserving. Field is right that logic studies patterns of reasoning that are truth preserving "where it counts". But "where it counts" doesn't mean something normative like 'with regard to premises that one could be justified in believing', but rather something descriptive like 'with regard to premises that people are likely to ever actually accept'.

Learning about numbers by thinking about sets

Maybe I just haven't done enough research yet, but I don't see why it's puzzling that we could learn new things about the numbers by learning things about the sets, and then applying them, given that we know perfectly well how facts about the numbers relate to facts about the sets (some people even identify the numbers with certain sets).

I mean: Is it puzzling that adding to a theory of shapes on a computer monitor (e.g. triangle, square etc.) a theory of the individual pixels that make up the shapes should let you derive new consequences about what shapes the monitor can display? I don't think this is puzzling - we see phenomena like this all the time, e.g. new facts about chemistry can teach us new facts about how DNA will behave, hence about biology.

Or what about the way that reasoning about sets (with ur-elements) could teach you things about ordinary objects: If there's no non-empty subset S of the people you invited to the party such that each person is in that subset was formerly married to some other person in S, then if anyone shows up to the party (and only invited people come), there will be at least one person who fails to meet an ex-spouse there.
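The party claim can even be checked mechanically. Here's a toy verification in Python (the function names and the brute-force method are mine, purely for illustration): for every possible ex-spouse relation on a small guest list, if no non-empty subset is "closed" in the stated sense, then every possible set of attendees contains someone with no ex-spouse present.

```python
from itertools import combinations

people = ['a', 'b', 'c']
pairs = list(combinations(people, 2))

def closed_subset_exists(exes):
    """Is there a non-empty subset S whose every member has an ex-spouse in S?"""
    for r in range(1, len(people) + 1):
        for S in combinations(people, r):
            if all(any(tuple(sorted((p, q))) in exes for q in S if q != p)
                   for p in S):
                return True
    return False

def someone_safe(attendees, exes):
    """Does at least one attendee have no ex-spouse among the attendees?"""
    return any(all(tuple(sorted((p, q))) not in exes
                   for q in attendees if q != p)
               for p in attendees)

# Check the implication for every symmetric ex-spouse relation on the invitees
# and every non-empty guest list drawn from them.
for bits in range(2 ** len(pairs)):
    exes = {pairs[i] for i in range(len(pairs)) if bits >> i & 1}
    if closed_subset_exists(exes):
        continue  # hypothesis fails, so the conditional is vacuously true
    for r in range(1, len(people) + 1):
        for attendees in combinations(people, r):
            assert someone_safe(attendees, exes)
print("implication holds for every ex-spouse relation on", people)
```

(Nothing in the post depends on this code, of course; it just makes vivid that the set-theoretic fact is doing real inferential work about party-goers.)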

I am tempted to suspect that this whole thing is not a problem if you are as much of a realist about math as about computer displays or chemistry or biology or party-goers, and if you face the problem of how we can *ever* know *anything* about mathematical facts head on (what my thesis claims to do). I mean, maybe if you thought that all mathematical knowledge was just a matter of stipulative definition it would puzzle you how we could learn things about the numbers from reasoning about the sets, which was (presumably) not part of the stipulative definition of the numbers (or the sets?). But even then, the mere fact that we can *ever* know bridge laws relating the numbers to the sets should be puzzling, not the fact that these bridge laws are fruitful...

Does anyone have ideas for a more charitable understanding of the concern here?

Sunday, April 4, 2010

McDowell on Rule-Following pg348

In 'Wittgenstein on Following a Rule', McDowell's objection to the idea that language use just involves contingent agreement among speakers in their dispositions to go on in the same way (rather than a linguistic community in a richer McDowellian sense) seems to be this. If the former view is right, we can never have more than "inductive" certainty that the rest of our community uses the word the same way. Hence, when we apply a certain term in a certain way, e.g. when we say "arthritis is inflammation of the joints", we can only be `inductively' certain that this expresses a truth - it's logically possible that everyone in our language community uses the word differently.

But why is this a problem? This supposedly bad consequence seems directly *true* in the arthritis case. Maybe it's worse to say that you can only be inductively certain that 2+2=4, since it's logically possible that your whole language community uses the words differently. But - come to think of it - don't we individuate language communities by common linguistic practice? So, arguably, if any community were to count as your linguistic community it would have to agree with you about many (most?) assertions that are really central to you, which you feel confident about. So the worry about the rest of our community using "2+2=4" differently enough for it to express a falsehood seems very very slender.

p.s. does anyone know if McD thinks he has a transcendental argument for the existence of other people, from the claim that we can have meaningful thoughts, and hence must belong to some non-private-language community?

Sunday, March 28, 2010

Different Senses of the Quantifiers?

Carnapians want to say that different things can truly be said to exist when speaking in different language-frameworks. So the existential quantifier "Ex" will mean different things in these different frameworks. But can there really be multiple different meanings for these different uses of Ex, each of which would qualify as a kind of, e.g., existential quantification?

An argument that you can't is: The meaning of Ex is determined by its introduction and elimination rules. So any putative kind of existential quantifier would need to obey them. Hence different senses E1 and E2 from different frameworks would both have to obey the standard introduction and elimination rules for Ex. But if E1 and E2 obey these rules, then you can prove E1x from E2x and vice versa. Hence there is no room for ambiguity.

This argument can't be right, though, if restricted quantification ('There is nothing in the fridge', 'All the beers are in the fridge') - something that even the most ardent anti-Carnapians accept - counts as `a kind of' quantification. And intuitively it does. Hence, in order to seem like a kind of quantification, a connective need not obey the full introduction rules. It suffices if there's a more limited range of instances of the introduction schema
P(x) --> Ex P(x) that speakers accept, together with all corresponding instances of the elimination schema: from Ex P(x) and (A ^ B ^ C ... ^ P(z) --> F), infer F (in cases where z does not occur free in A, B, C ... or F). This is what we have for beers in the fridge.

Why can't the Carnapian claim that the same thing goes on with different linguistic frameworks? The different choices for when P(x) --> E2x P(x) is acceptable will each correspond to a different meaning for the existential quantifier. We can even represent these different possible senses for existential quantification formally, by saying a kind of existential quantification E_i corresponds to each subset S_i of the set of predicate-expressions (i.e. to each choice of what predicate-expressions the introduction and elimination schema are supposed to hold for).
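One way to make this concrete (a toy Python model of my own, not anything in Carnap): treat each framework's existential quantifier as ranging over that framework's own domain, so that E1 and E2 really can give different verdicts on the same predicate.

```python
# Each "framework" supplies its own domain over which its 'Ex' ranges.
def exists(domain, pred):
    """Framework-relative existential quantifier: Ex P(x) over this domain."""
    return any(pred(x) for x in domain)

fridge_framework = {'beer1', 'beer2'}                    # E1: the fridge framework
wide_framework = {'beer1', 'beer2', 'table', 'number7'}  # E2: a more generous framework

is_table = lambda x: x == 'table'

print(exists(fridge_framework, is_table))  # False: "there is no table" true under E1
print(exists(wide_framework, is_table))    # True: "there is a table" true under E2
```

The restricted quantifier satisfies the elimination schema in full, but only those instances of the introduction schema P(x) --> Ex P(x) whose witnesses lie in its domain - which is just the pattern described above for 'the beers in the fridge'.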

You are probably worrying that this turns the Carnapian into a kind of maximalist (all the objects in question really exist, and different frameworks just correspond to different restrictions on the quantifier), but I can't actually see any argument for that. So speak up if you can!

Thursday, March 18, 2010

Seeing Sets

I used to laugh about (early) Pen Maddy claiming that we could see sets. But now I think that's almost right - though not in the way that Maddy intended it.

I can see that my program doesn't infinitely loop, or that the 1000th prime is 7919, by pressing enter, waiting a few seconds, and then looking above the command line prompt on my computer. These are all claims about mathematical objects, yet (given suitable equipment and background knowledge) we would ordinarily say that I can see these things to be true.

This seems just as literally true as the claim that I can see that the electricity is on, when I look at the lit windows of the house next door.

In both cases I immediately form the belief, probably am justified, am depending on a lot of contingent assumptions about electronic wiring etc.
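The prime case is easy to realize. Here's a minimal sketch of the sort of program I have in mind (plain trial division; the particular method is just my illustration):

```python
def nth_prime(n):
    """Return the nth prime, by trial division against the primes found so far."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes[-1]

# Press enter, wait a moment, and "see" a fact about the numbers:
print(nth_prime(1000))  # prints 7919
```

Given background knowledge that the machine and the program are reliable, looking at that output seems as good a case of seeing that the 1000th prime is 7919 as the lit windows are of seeing that the electricity is on.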

But maybe we should distinguish seeing Xs from seeing that some fact about Xs obtains? Maybe there's something especially problematic about believing in objects which you can't see?

-If seeing x = seeing that x exists, then I can see that there is a 1000th prime in the above example (suppose I wrote the program but had never seen the proof that there are infinitely many primes)

-If we take a more intuitive approach to seeing Xs (i.e. is it awkward to say 'I am now looking at an X'?) then:
a) certainly it is awkward to say `I am now looking at a number'...hmm though we might say `I am now seeing the line of the program that causes the crash (and lines in programs are abstract objects, just like lines in poems),'.
b) it's also pretty awkward to say `I am now seeing a drought', or `I am now seeing North America' or 'I am now seeing a proton'.

If you can see a drought when you look at a color map of precipitation, why can't you see a pair of twin primes by looking at a chart?
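And such a chart is trivial to rig up - a sketch (the sieve is my choice of method, nothing more):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(100)
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
print(twins)  # the "chart": consecutive primes differing by 2, e.g. (11, 13)
```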

Overall conclusion:

Seeing that P really means little more than having some visual experience which causes you to immediately believe that P, and which you might cite as part of your justification for believing that P. So if you can know things (e.g. all the background mathematical beliefs involved in the program case) about numbers, then it's not too hard to arrange to see things about them.

Of course, the anti-platonist won't think that you can know things about numbers either - well, that's where my thesis comes in. But if we can know some things about the numbers, it's not hard to arrange things so that we can see further things about them, i.e. rig up reliable methods for forming beliefs about them whose last step involves visual experience.

Tuesday, March 16, 2010

"Ideal" vs. "Ideal"

Scientific explanations which explain the behavior of an actual object by relating it to the behavior of an ideal object don't usually involve a normative element. It's not as if we think that inclined planes should be frictionless, or planets should be perfectly spherical. These ideal models aren't somehow better than the actual objects in question, they are just easier to think about.

I wonder if psychological explanations of actual human behavior by relating it to rational human behavior ("the price rises because if everyone was a homo economicus with this set of beliefs and desires they would..." "actually, getting a beer is what a fully rational person with Jim's beliefs and desires would do right now..") are just instances of this. If they are, then the normativity makes no difference to the explanation. The idea that one ought to be rational (assuming there is such a fact) plays no more role in the success of the explanation than the claim that inclined planes ought to be frictionless plays in the success of the ordinary physical explanation.

Potter and the Loch Ness Monster

M. Potter asks why some philosophers intuitively require so much less evidence for introducing abstracta than for concrete objects. How come the requirement not to "multiply entities beyond necessity" doesn't apply to these? Without an answer, taking this relaxed attitude towards positing yuppie cliques and category-theoretic arrows, while being very skeptical about the Loch Ness Monster, looks a bit unprincipled. Well, here's a sketch of an answer.

Start with a reliability-based notion of justification: we evaluate a creature's justification by thinking of it as having certain faculties, i.e. mechanisms that reliably produce true beliefs (e.g. infra-red vision, smell, first order logic). We say a belief is justified when it is the result of one of these reliable mechanisms 'working as intended'. Now, in order for mechanisms that produce contingent beliefs to be reliable, they will typically have to be causally sensitive to facts about the outside world - so that e.g. they tend to only produce the belief "there's a llama" in situations where there actually is a llama. In contrast, you can build a faculty that reliably produces the right results about necessary aspects of the world without using any such external input. And if there are necessary truths such as: whenever there are yuppies behaving in such and such a way there's a clique of yuppies, you can build in a reliable mechanism that makes this transition immediately, without requiring any further input from the environment. So it's not surprising that the reliable belief-forming mechanisms we humans have should require less justification for introducing necessary abstracta, or ordinary objects whose existence is necessitated by already-known facts about other objects, than for introducing concrete objects (like the Loch Ness Monster) which have neither of these properties.

Now obviously, what I just said won't convince anyone who has some *other* reason for rejecting abstract objects (and ordinary objects) to believe in them. But it does provide a unifying explanation, and hence (I think) a way for those who a) have the intuition that introducing abstract objects needs less justification and b) are inclined to take this intuition at face value to defeat Potter's challenge that their intuitions about justification are unprincipled. Quite to the contrary, this distinction falls out of a reliable-mechanisms theory of justification almost immediately!


If (all) propositions intrinsically have a logical structure, then does an English speaker's utterance of "I will go to the store unless you already bought milk" typically express a proposition with the structure ~P>Q, or one with the structure PvQ?

Does it depend on the situation? Who bought milk last time? :)

It seems better to say that propositions expressed by natural language sentences only have logical structures relative to a choice of logic and a method of translation.

Sunday, March 14, 2010

Carnap Disenchantment

[Sigh, I can never make up my mind about Carnap. I guess I'm feeling anti today]

I understand what it is to say that it's "merely a pragmatic choice" whether to use first order logic with the usual connectives vs. with the Sheffer stroke. In both cases you will be expressing truths when you derive things in accordance with the logical laws, so the only harm you can do by choosing the Sheffer stroke is make your proofs take longer.
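The interderivability behind this point is easy to check mechanically: the usual connectives are all definable from the Sheffer stroke (NAND) alone, as this little truth-table check illustrates (the Python rendering is just mine):

```python
def nand(p, q):
    """The Sheffer stroke: true unless both arguments are true."""
    return not (p and q)

# Standard definitions of the usual connectives from the stroke alone.
def not_(p):    return nand(p, p)
def and_(p, q): return nand(nand(p, q), nand(p, q))
def or_(p, q):  return nand(nand(p, p), nand(q, q))

# Exhaustive truth-table check: the definitions agree with the usual connectives.
for p in (False, True):
    assert not_(p) == (not p)
    for q in (False, True):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
print("NAND suffices to define not, and, or")
```

So whichever vocabulary you pick, exactly the same truths come out; that is what makes this choice merely pragmatic, in contrast with the choice discussed next.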

But the "choice" of accepting a weaker vs. a stronger and (say) inconsistent logical system does not have this feature. In one case, you will be deriving truths. In the other case, you will be deriving some falsehoods/crashing your whole language so that none of your sentences are meaningful at all.

So I don't see what Carnap can mean by saying that adopting a system is merely pragmatic choice. Adopting a consistent system is a hard epistemic task! The only pragmatic choice is choosing which system - of a menu of systems of reasoning which are coherent enough to give their terms meaning and count as truth-preserving - to use.

Saturday, March 13, 2010

Paradox of Analysis

The paradox of analysis is roughly this: if a conceptual analysis of a term like 'justice' were successful, then the two sides of the analysis should mean the same thing, so the analysis should also be trivial.

The notions of cognitive triviality (analyticity?) and sameness of meaning are infamously hard to spell out, but I think we can get much of the intuitive puzzlement of the paradox of analysis by rephrasing it as follows:

If you know already what 'justice' means, how can it be useful to you to have a conceptual analysis that says an act is just if and only if it is ____?

If you accept this restatement of the problem, I propose the answer is this:

Your "knowledge of what `justice' means" consists in something like a disposition to accept some collection of methods of inference, which - under favorable conditions - tend to lead to your beliefs about what's just correctly tracking the facts about what's just. Call the particular algorithm for making and revising judgements about what's just α. So your understanding of the word 'justice' consists in the fact that your brain implements α.

The potential usefulness of conceptual analysis comes from the fact that your brain can implement α without:

a) your knowing what algorithm α is (e.g. some processes in your brain recognize grammatical english sentences, but you don't know what these processes are).
b) your knowing that the descriptions of actions which algorithm α ultimately gives a positive verdict on are exactly those which have property B. (this is useful when your usual methods of checking for B-hood are faster/easier to deploy than your usual methods of checking for justice)
c) your knowing that property C applies to most of the things which α would ultimately give a positive verdict on, but C is easier to apply, and all the purposes normally served by considering which actions are just would be served even better by thinking about which actions have property C. (the classic definitions of computability and limit are examples of this kind)

Thursday, March 11, 2010

"Coherence" and Mathematical Existence

When I say that "the more practically benign a system of proto-mathematics is, the more likely it is to count as expressing largely true claims about some domain of objects", I realize that this sounds horribly woolly. People naturally ask me: but how practically benign does a practice have to be to guarantee that it succeeds in talking about some object? (not to mention: how do you measure `largely true'?)

Why Be Woolly

Here's a classic example of a claim that isn't woolly:

H: Any logically consistent system of mathematical beliefs counts as expressing truths about some suitable domain of objects.

We can see H is false because it implies that if I believe ZFC+{X} and you believe ZFC+{~X} where X is some statement about number theory independent of ZFC, since we both have logically consistent systems of belief, we will both be right - just talking about different objects.

But what goes wrong?

Note the problem isn't that there aren't enough mathematical objects (if we just have sets, every consistent first order theory has a model). Rather (I claim) it's because actual people will use words in the mathematical theory like 'finite' or 'smallest' or 'number' which have a meaning that goes beyond their role in this first order logical stipulation.

When we both say that by the "numbers" we mean (among other things) the smallest collection containing 0 and closed under successor, smallest (intuitively) means the same thing for both of us, so it is NOT correct to then interpret each of us as talking about whatever larger non-standard model makes our claim true.

Hence our informal use of the words like "smallest" or "all possible collections" imposes constraints on interpreting us which go beyond the first order logical content of our mathematical statements.

If you buy this, here's why you should be woolly. In general there will be some vagueness with regard to how wrong you can be about Madagascar, Christmas, or (to use Quine's famous example) atoms, and still count as talking about these things. Once a theory is sufficiently wrong it can be a tossup whether to say that the objects in question are real and the person is wrong about them, or that there are no such objects. But this is exactly what we face with regard to mathematical objects as well! We have an amorphous informal practice, and a norm that people count as referring to whatever the most natural object is that best satisfies their methods of reasoning about these putative objects, provided there is one that matches suitably well.

There's no bright line about how wrong your various formal and informal beliefs about some putative object can be while you still count as referring - for the same reason in math as in physics or history.

Hence, I don't try to draw one, and that's why I'm woolly on this issue, and why you should be too!

All we can say generally is: Mathematicians can posit new objects, and the more logically consistent their reasoning about these objects is, and the less their intuitions about consequences of reasoning about these objects lead to false conclusions about other things, the more likely it is that they will count as expressing largely true claims about some suitable piece of the mathematical universe.

The other non-woolly alternative is to give a list: a mathematician counts as referring if they have x beliefs (which are true of the integers), y beliefs (true of the reals), z which are true of the imaginary numbers, w for the quaternions, v for sets, k for arrows ... and that's all the mathematical objects that it is metaphysically possible to think about! But surely this is insane.

trolley problems and literature examples

I've heard it suggested that moral philosophers should consider examples from literature rather than simplified cases as in trolley problems. Here's a theory of why examples from literature might be particularly bad for moral philosophy purposes.

Kant says (as I understand him) that the experience of beauty happens when observing an object provokes the "free play" of the conceptual faculties, producing a harmonious volley between the intellect and the imagination. This works most naturally for novels and poems, where reading a line can set off a chain of thoughts which aren't logical deductions, but are still somehow naturally suggested by the line.

In contrast, in much moral philosophy you are looking for (relatively) general principles [it's an interesting question why this is], that different people might agree to and be guided by even when particular interest leads them in different directions. So you want something like "all actions of X kind are impermissible". For these purposes, you want to show that your general principle is acceptable even in, as it were, the worst-case scenario, even in the most perverse instances. You also want to avoid features that would distract from the question of whether the given action is permissible, and also unclarity about what the described scenario is supposed to be.

Now, if we buy the Kantian idea about beauty we get a quick explanation for why literature examples will tend to be bad for the purposes of moral philosophy. Beautiful cases will be ones that promote the free play of the intellect, considering all kinds of different aspects of what's being described, and reaching out into all sorts of other questions. Hence they are particularly likely to involve a) simultaneous application of multiple apparent moral reasons for and against b) interesting factual questions about what the situation really is (do we really know that the soldier is unpersuadable?) c) other different but related moral/philosophical issues - that one might easily confuse with the issue under discussion.

So literature examples may be good at suggesting questions, but there's some reason to think they are - so to speak - actively engineered to be distracting when considered as examples in a debate about some particular moral principle.

Sunday, February 21, 2010

Species and Couches

A smart philosopher of biology I know claims to be researching "whether there are (really) species, as opposed to just individuals". So far as I can tell, he is investigating whether biological explanations that appeal to species are not really always better put in terms of individuals. That is, he's studying whether talking about species serves a certain kind of (ineliminable?) role in biological explanation.

That definitely seems worth investigating - especially since there are so many cases where the distinction between different species looks very unprincipled. (Because of ring species and ligers it won't do to just say that two things are the same species if they can produce fertile offspring.)

But it seems strange to me that he puts this in terms of `investigating whether species really exist'. This is because, presumably, he thinks couches really exist, and yuppies too, even though we could surely phrase an adequate biological and scientific theory in such a way as not to entail any sentences of the form Ex couch (x) or Ex yuppie(x).

What I THINK might be going on is that he thinks objects need to earn their keep, in a way that concepts don't. That is: it's fine to apply scientifically useless predicates like 'is a yuppie', but not to introduce scientifically useless *objects* like species. On this reading he would be fine with saying that dogs exist, or that two newts are conspecifics, but not with saying that there are (abstract) objects called species.

But I don't see quite how one would motivate this differential treatment. (Admittedly this may have something to do with my current adherence to the merely logical notion of objecthood.) Also, the considerations that make the notion of species look unprincipled seem to apply just as much to claims about something being a dog, or about two animals being conspecific.

*Obviously some scientifically useless objects are bad to introduce, like the flying spaghetti monster - but that's because their existence would entail false claims about the distribution of matter in space-time. In contrast, just proposing new ways to think of the same old distribution of matter etc. in space-time as constituting objects in different ways (e.g. tables, vs. half-tables, vs. complete living room sets, vs. dearths of tables) seems harmless.

[edit: there is something SLIPPERY about the way I am using the concept/object distinction here. must think more about this, and ask the philosopher of bio]

Explanation Puzzle

On the one hand, we think that the fact that a theory T1 allows for a "better explanation" of certain phenomena than T2 gives us reason to believe that T1 rather than T2 is correct. It's an obvious (if not particularly explanatory) reason to prefer one theory to another that it "does a better job of explaining the data"!

On the other hand, we think that a better explanation can be one that better helps human beings "grok" patterns in the behavior of physical systems which may be mathematically very complex. Given human psychology, attention span, etc., a simple ceteris paribus statement about struck matches tending to light can be a better explanation than one that appeals to more specific details. To choose a more extreme example, even if there were a completely successful theory of microphysics, most people feel we would still have an explanatory task. We would still want elegant theories that told us about general high-level patterns in how the microphysical facts would evolve forward through time (e.g. the ideal gas law, biology, and maybe psychology and economics).

But now here's the problem: do we really think that the fact that a theory T allows for nice, tractable, human-grokkable explanations of high-level phenomena makes it more likely to be true? For example:

I find "consider a spherical cow" style economics explanations, or "consider philosophers building up society from a state of nature" style early modern philosophy explanations, way more attractive, satisfying, and easy to remember than explanations that cite lots of boring contingent historical facts. But this doesn't really make me feel that these explanations are more plausible, or getting at the heart of matters more.

I mean, I wouldn't be surprised if primate intelligence is optimized for avoiding getting double-crossed by other monkeys, making practical plans, etc., so that we like explanations better if they relate the explananda to these things (i.e., people with plans). Indeed, don't we actually find this when explaining a phenomenon to people in different disciplines - that people familiar with different areas find different explanations more satisfying?

We like explanations where lots of correct consequences "fall out immediately" from a tiny theory. But what seems to fall out immediately (vs. just being an ugly mathematical consequence) may well depend on how familiar you are with inferences of that kind. And folk (belief/desire) psychology is something we are *all* very familiar with from daily life. Hence, when someone says "this electron wants to escape the other electron" or "countries covet land", we have lots of immediate ideas about what behavior should follow, because we are experts at drawing consequences from belief/desire psychology, and we then just convert these consequences back to the task at hand.

But surely allowing for nice parallels to common problems in monkey social climbing is not a feature that has much to do with genuine theoretical elegance, or with how likely a theory is to be correct.