Consider the following argument against quantifying over everything.

"It can't be possible to quantify over everything, because if you did, there would have to be a set, your domain of quantification, which contained all objects as elements. However, this set would have to have to contain all the sets. But there can be no set of all sets, by Russell's paradox argument."

I claim it's unsound for the following reason:

We presumably can quantify over all the sets (e.g. when stating the axioms of set theory). So, if (as this argument assumes) quantifying over some objects required the existence of a set containing all the objects quantified over, we would already have a set containing all the sets, hence Russell's paradox and contradiction.

Thus, meaningfully making an assertion about all objects of a certain kind does NOT require that there's a set containing exactly these objects.
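Russell's reasoning, which both the argument and the reply lean on, can be illustrated in miniature. Here's a toy Python sketch (an illustration only, not a claim about formal set theory) where a "set" is modeled as a membership predicate, so `t(s)` plays the role of "s is an element of t":

```python
# Toy model of Russell's paradox: a "set" is a predicate, and
# membership "s in t" is modeled as t(s) returning True.
def russell(s):
    # russell "contains" exactly the sets that do not contain themselves
    return not s(s)

# Asking whether russell contains itself demands that
# russell(russell) == not russell(russell), which can never settle:
# the self-application just regresses forever.
try:
    russell(russell)
    outcome = "settled"
except RecursionError:
    outcome = "no consistent answer"

print(outcome)
```

The `RecursionError` is, of course, just Python's way of failing to evaluate the self-application; the philosophical point is that no assignment of a truth value to "russell contains itself" is coherent.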

---

BONUS RANT: Why would one even think that where there is quantification there must be a set that's the domain of quantification? Because of getting over-excited about model theory I bet. [warning: wildly programmatic + underdeveloped claims to follow]

Model theory is just a branch of mathematics which studies systematic patterns relating what mathematical objects exist and what statements are always/never true. It's not some kind of Tractarian voodoo that `explains how it's possible for us to make claims about the world'. Nor do sets (e.g. countermodels) somehow actively pitch in and prevent claims like "Every dog has a bone" from expressing necessary truths!

## Saturday, October 31, 2009

### Is "Set" Vague?

The (normal) intuitive conception of the hierarchy of sets is roughly this:

The hierarchy starts with the empty set, at the bottom. Then, above every collection of sets at one stage, there's a successor stage containing all possible collections made entirely from elements produced at that stage. And, above every such infinite run of successor stages, there's a limit stage, which has no predecessor, but contains all possible sub-collections whose elements have already been generated at some stage below.
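The finite stages of this picture can actually be computed. A small Python sketch (finite successor stages only, of course; the interesting questions concern the transfinite ones):

```python
from itertools import chain, combinations

def powerset(s):
    # All subsets of s, each represented as a frozenset
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

# V_0 is empty; each successor stage V_{n+1} contains all possible
# collections made entirely from what was generated at stage n.
stage = frozenset()          # V_0 = {}
sizes = []
for _ in range(4):
    sizes.append(len(stage))
    stage = powerset(stage)  # V_{n+1} = powerset(V_n)
sizes.append(len(stage))

print(sizes)  # [0, 1, 2, 4, 16]
```

The sizes grow as iterated exponentials (the next stage would already have 65536 elements), which gives some feel for why "how far up does it go?" is the only open-ended question left once the successor and limit rules are fixed.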

But how far up does this hierarchy of sets go? Is there a fact of the matter, or does our conception not determine this?

The conception/our intuitions about sets don't directly tell us when to stop. For any stages we suppose we are looking at, it always seems to make sense to think of new collections that contain only sets generated by that point (e.g. the collection of all things so far generated). Of the sets generated by any collection of stages we can ask:

- Does the proposed next stage/limit stage of these stages really make sense? Are there really such collections?

- If so, are the collections generated at this stage still sets?

A textbook will tell you that at some point the things generated by the process above DO make sense, but DON'T count as sets. So, for example, there is a collection of all sets, but (on pain of paradox) this is not itself a set, but only a class.

However,

a) This just pushes the philosophical question back to classes: is there a point at which there stop being classes? Is there something else (classes2) which has the same relation to classes as classes have to sets? [One of my advisors calls this the "Neapolitan" view of set theory]

b) We don't have any idea of WHEN the things generated by the process above are supposed to stop counting as sets.

Note that the issue with b) is not just that we don't know whether sets of a certain size exist. There are lots of things about math we don't know, and (imo) could never know. Rather, the uneasy feeling is that our conception doesn't "determine" an answer to this question in the following much stronger sense:

There could be two collections of mathematical objects with different structures, each of which equally well satisfies our intuitive conception of set.

For, consider the hierarchy of classes (note: all sets are classes). There might be two different ways of painting the hierarchy to say at what point the items in it stop counting as sets. Our intuitive conception just seems to generate the hierarchy of classes, not to say when things in it stop being sets!

In contrast, in the case of the numbers, I might not know whether there are infinitely many twin primes, but any two objects satisfying the intuitive, second order, characterization of the numbers would have to have the same structure (and hence make all the same statements of arithmetic true).
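The categoricity fact being appealed to here is Dedekind's theorem, and it can be stated precisely. A sketch in standard notation (the key point is that induction is a single second-order axiom, quantifying over every subset of the domain):

```latex
% Second-order induction: one axiom, quantifying over all subsets X
\forall X \,\bigl[\, X(0) \wedge \forall n\,(X(n) \rightarrow X(Sn))
  \;\rightarrow\; \forall n\, X(n) \,\bigr]

% Dedekind (1888): any two full models of the second-order Peano
% axioms are isomorphic, and hence satisfy the same arithmetical
% sentences:
(M, 0_M, S_M) \models \mathrm{PA}^2 \;\wedge\;
(N, 0_N, S_N) \models \mathrm{PA}^2
  \;\Longrightarrow\; (M, 0_M, S_M) \cong (N, 0_N, S_N)
```

So whatever we don't know about twin primes, it isn't indeterminacy of the structure the question is about.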

Thus, our intuitive conception of set seems to be hopelessly vague about where the sets end. Hence, even if you are a realist about mathematical objects, we seem forced to understand set theory as making claims about features shared by everything that satisfies the intuitive conception of set, rather than as making claims about a unique object.

Questions:

1. If you buy the reasoning in the main body of this post, does it give an advantage to modal fictionalism? e.g. the modal fictionalist might say: "You already need to agree with us that doing set theory is a matter of reasoning about what objects satisfying the intuitive conception of set would have to be like. What does incurring extra commitment to the actual existence of mathematical objects (as opposed to their mere possibility) do for you?".

2. An alternative would be to reject the textbook view, and say EVERYTHING generated by the process above is a set. Hence, you couldn't talk about a class of sets. Would this be a problem?

3. [look up] Is it possible that all initial segments of the hierarchy of classes that reach up to a certain point are isomorphic? (I mean, the mere existence of a one-to-one, membership-preserving function that's into but not onto (the identity) doesn't immediately guarantee that some *other*, more clever, function IS an isomorphism.)

[Maybe you can prove this is not possible by using the fact that one initial segment would have extra ordinals, and this isomorphism could be used to define an isomorphism between ordinals of different sizes, which is impossible.]
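For what it's worth, the standard answer is no, and by an even more direct route than the ordinals argument sketched above: transitive ∈-structures are rigid. In outline:

```latex
% If f : (A, \in) \to (B, \in) is an isomorphism between transitive
% classes, then \in-induction gives, for every x \in A,
f(x) \;=\; \{\, f(y) : y \in x \,\} \;=\; x,
% so f is the identity and A = B. Hence two distinct initial
% segments of the hierarchy are never isomorphic.
```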

4. Is there some weirdness about the idea that collections in general (whether they be sets, or classes) eventually give out - so that there's no collection of all collections?

We could say there are sets, classes, classes2, classes3 and so forth. This lets us say there's a class of all sets, and a class2 of all classes, etc. But as far as collections in general go, we must admit that there's no collection of all collections, on pain of contradiction via Russell's paradox.

Well, I don't personally find this that problematic. It's a surprising fact about collections maybe, but mathematics often yields surprising results.


### Relations vs. Sets of Ordered Pairs

(Normally in math) a relation is defined to be a set of ordered pairs.

But the `elementhood' relation between sets can't, itself, be a set of ordered pairs - since there can't be a set which contains each ordered pair (x, y) of sets such that x is an element of y. [From the existence of such a set you could use the axiom of collection in ZF to derive the existence of a set of all sets, and hence the Russell set and a contradiction.]

Therefore, not all relations (in the ordinary sense) are sets of ordered pairs (i.e. relations in the mathematical sense).
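The contrast can be made concrete: restricted to any *set* of sets, elementhood is a perfectly respectable set of ordered pairs; the trouble only arises for the relation on all sets at once. A toy Python sketch using Kuratowski pairs, over a tiny hand-picked universe:

```python
def kuratowski(a, b):
    # Kuratowski ordered pair: (a, b) = {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

e = frozenset()                      # {}
s = frozenset({e})                   # { {} }
universe = {e, s, frozenset({e, s})} # three hereditarily finite sets

# Elementhood restricted to this universe, as a set of ordered pairs:
elem = {kuratowski(x, y) for x in universe for y in universe
        if x in y}

print(len(elem))  # the pairs (x, y) with x an element of y
```

Here `elem` has exactly three members, for the three elementhood facts holding among {}, {{}}, and {{}, {{}}}.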


## Friday, October 30, 2009

### Thin Realism #2

Hmm on further reflection, `thin realism' is just Lumpism.

So see the essay above for why Lumpism is right and all that seductive stuff about the world having a logical structure/existence claims having a special epistemological status is wrong.


### "Thin Realism" - what could it be?

I always want to say "I think there are numbers - but I understand existence in a thin logical sense". But I feel kind of dishonest saying this. It's too much like the sleazy "Of course P - but I don't mean that in any deep philosophical way" which happens when Wittgensteinians get lazy.

So here are some actual concrete ways in which I differ from other platonists (i.e. other people who believe there are mathematical objects).

1. I don't think we need to posit numbers to explain how there can be unknowable mathematical facts.

2. I think fictionalism/if-then-ism is perfectly coherent. We could have had a mathematical practice which was completely based around mathematical properties, and studying their relations to one another, e.g.: `Insofar as anything hierarchy-of-sets-izes, it's mathematically necessary that it satisfies the continuum hypothesis'.

And here's an attempt to say what having only "a thin logical notion of existence" means:

When we ask what objects exist, this is equivalent to asking what sentences with a given logical form (Ex) Fx are true. So far, this is just Quinean orthodoxy.

But now the question is: what makes a given sentence (say, of English) have a certain logical form?

Now, I think having existential form is just a matter of what inferences can be made with that sentence, and what other -contrasting- sentences are in the language. We cook up various logical categories in order to best represent, and exploit, patterns in which inferences are truth preserving. Furthermore, there's nothing special about objects and object expressions. Each component of a sentence (be it concept-word, object-word, connective or operator) makes a systematic contribution to the truth conditions of the sentences it figures in (i.e. the class of possible situations where the sentence is true).

On this view, choices about the logical form of a sentence wind up not being very deep - the question is just what's the most elegant way to capture certain inference relations.

In contrast, (I propose) having a "thick" notion of objecthood and existence means thinking that there IS something more than an elegant summary of inference relations at stake when we decide how to cut sentences up into concepts and objects. For example, you might think:

1. It's easy to learn statements which don't imply that any objects exist (all bachelors are unmarried), whereas learning statements that do imply the existence of at least one object (there are some bachelors) is harder.

2. The *world* has a logical structure too! - so the most elegant way of cutting up your sentences to capture inference relations might still be wrong, because it fails to respect the logical structure of the world.

[Oh yes, they are kind of seductive. More about why they are wrong later.]


### Unknowable Truths without Objects

I believe in mathematical objects, but I think the following appeal to them is dead wrong:

"The existence of mathematical objects is what allows there to be unknowable mathematical truths, whereas there are no unknowable logical or `conceptual' truths."

Corresponding to every unknowable AxFx statement in arithmetic, there's a purely modal statement that's not ontologically committal, but would let you infer the arithmetical statement and hence must be equally unknowable, namely:

"It is impossible for there to be a machine on an infinite tape which a) acts in such and such such-and-such a physically specified way (here we have we list physical correlates of rules for some Turing machine program that checks every instance of the AxFx statement), and b) stops."

"The existence of mathematical objects is what allows there to be unknowable mathematical truths, whereas there are no unknowable logical or `conceptual' truths."

Corresponding to every unknowable AxFx statement in arithmetic, there's a purely modal statement, that's not ontologically commital, but would let you infer the arithmetical statement and hence must be equally unknowable, namely:

## Thursday, October 22, 2009

### Why I am not Carrie Jenkins

Carrie Jenkins' 2009 book Grounding Concepts: An Empirical Basis for Arithmetical Knowledge proposes a theory that has a lot in common with my thesis project.

Both of us:

- want to give a naturalistic account of mathematical knowledge

- in particular, want to explain how humans can have managed to get "good" combinations of inference patterns that count as thinking true things about some domain of mathematical objects/having a coherent conception of what those objects must be like, rather than "bad", 'tonk'-like patterns of reasoning.

- appeal to causal interactions with the world, to explain how we wind up with such combinations of inference dispositions.

BUT there are some important differences. Here's why (I claim) my view is better.

Jenkins' theory:

Jenkins winds up positing a whole bunch of controversial, and perhaps under-explained philosophical notions to account for how experience gives us good inference dispositions. She proposes that:

Experience has non-conceptual content which grounds our acquisition of concepts so as to help us form coherent ones. Then, when we have a coherent concept of something like the numbers, we inspect it to see what must be true of the numbers and reason correctly about them.

- The idea that there's non-conceptual content is a controversial point in philosophy of perception.

- The idea that experience can "ground" concept acquisition without playing a justificatory role in the conclusions drawn is not at all clear. What is this non-justificatory, but presumably not just causal, relationship of grounding supposed to be? (Kant's notion of a posteriori concepts seems relevant, but that's none too clear either.)

- Finally, what is concept inspection (presumably you don't literally visit the 3rd realm and see the concepts), and how is it supposed to work? Jenkins admits that this is an open question for further research.

My theory:

In contrast, my view gives a naturalistic account of mathematical knowledge that doesn't need any of this controversial philosophical machinery. I propose that:

People are disposed to go from seeing things, to saying things, to being surprised if we then see other things, in certain ways. When these inference dispositions lead us to be surprised, we tend to modify them.

Thus, it's not surprising that we should have wound up with the kind of combination of arithmetical inference dispositions + observational practices + ways of applying arithmetic to the actual world which makes our expected applications of arithmetic work out.

For example: insofar as we had a conception of the numbers which included the expectation that facts about sums should mirror logical facts in a certain way, it's not surprising that we wound up also believing the kinds of other claims about sums which make the intended applications to logic work out (e.g. believing 2+2=4, not 2+2=5).

Note that we don't need to posit any mysterious faculty of concept-inspection, or any controversial non-conceptual experience. All I appeal to is perfectly ordinary processes. People go from one sentence to another in a way that feels natural to them (whether or not they are so fortunate as to be working with coherent concepts like +, rather than reasoning like Frege did about extensions). And when this natural-feeling reasoning leads to a surprise, they revise.

[Well, perhaps I'm also committed to the view that innate stuff about the brain makes some ways of revising more likely than others, and certain initial inference-dispositions more likely than others, in a way that doesn't make us always prefer theories that are totally hopeless at matching future experience. But you already need something like this even to explain how rats can learn that pushing a lever releases food, so I don't think this is very controversial.]
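The mirroring idea (facts about sums tracking logical facts) can be spelled out for 2+2=4: if there are exactly two Fs and exactly two Gs and nothing is both, then there are exactly four things that are F or G. A brute-force Python check over a small finite domain (six objects, enough for this instance; the logical claim itself holds for any domain):

```python
from itertools import product

def exactly(k, xs):
    # xs is a 0/1 labelling of the domain; count how many are labelled 1
    return sum(xs) == k

# Over every way of labelling 6 objects as F and/or G, check that
# "exactly 2 Fs, exactly 2 Gs, nothing both" forces "exactly 4 F-or-Gs".
ok = True
for fs in product([0, 1], repeat=6):
    for gs in product([0, 1], repeat=6):
        if (exactly(2, fs) and exactly(2, gs)
                and not any(f and g for f, g in zip(fs, gs))):
            ok = ok and exactly(4, [f or g for f, g in zip(fs, gs)])

print(ok)  # True
```

Believing 2+2=5 instead would make this check fail, which is the sense in which the intended application to logic constrains which arithmetical claims we could stably have ended up with.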

## Tuesday, October 20, 2009

### Mathematical Concepts and Learning From Experience

I've been reading Susan Carey's new book on the development of concepts, which features a lot of interesting stuff about the development of children's reasoning about number. The last two chapters are philosophical though, and bring up an important point, which it had not occurred to me needed to be stressed:

Learning from experience need not take the form of someone explicitly forming a hypothesis, and then letting experience falsify it/doing induction to conclude the hypothesis is true.

If this were all that experience could do, it would be hopeless to appeal to it to help explain how we could get mathematical knowledge. For, plausibly, you only count as having the concept of number, once you are willing to make certain kinds of applications of facts about the numbers, reason about the numbers largely correctly etc. So, by the time that experience could falsify hypotheses containing the mature concept of number, you would already have to have lots of mathematical knowledge.

Instead, experience helps us correct and hone our mathematical reasoning all through the process of "developing a concept". How can this be?

Well, firstly, think about the way students are normally introduced to the concept of set. No one makes a hypothesis that there are sets, nor do math profs attempt to define sets in other terms. Rather the professor just demonstrates various ways of reasoning about sets, ways of using these claims to solve other mathematical problems etc. and gets the students to practice. Given this, the student's usage and intuitions conform more and more to standard claims about the sets, and eventually they count as having the concept of set.

I propose (and I think Carey would agree) that the original development of many concepts in mathematics works similarly, only with trial and experience playing the role of the teacher.

You start out not having the concept, and try various usages. Here, however, rather than having a professor to imitate, you just have your general creativity/trial and error/analogical reasoning to suggest ways of reasoning about "the X"s, and then an ability to check whatever kinds of consequences and applications you expect at a given time. Often this kind of creative trying and analogical reasoning will turn out to fail in some way, such as leading to contradiction, or underspecifying something important. But then you can correct it. Inconsistent reasoning about limits in the 19th century and sets in the early 20th would be examples of the former. And the kind of process of refinement of the notion of polygon in Lakatos's Proofs and Refutations would be an example of the latter.

We try out various patterns of reasoning about the world (e.g. calling certain things Xs, trying to apply the analogue of good reasoning about one domain to another) -with perhaps a nudge from brain structures subject to evolution affecting which patterns we are likely to try- and experience corrects these inference patterns until they cohere enough that we count as genuinely having some new concept. And note that no conscious scientific reasoning must be assumed to start this process; all we need is some disposition to go from seeing things to making noises to doing things, together with a playful/random/creative inclination to try extending those dispositions in various ways!

p.s. I haven't emphasized this point in the past, because I think questions like 'when exactly does someone start having the concept of X?' don't generally cut psychology or metaphysics at their joints. I mean: when exactly did people start having the modern conception of atom? The interesting facts are surely facts about when people started accepting this or that idea about "atoms", or reasoning about "atoms" in this or that way. Coming up with a decision about exactly what amount of agreement with us is necessary for people to count as having the same concept is a matter of arbitrary boundary setting.

But I realize now that ignoring the whole issue of concepts can be confusing. So let me just say:

When I say mathematical knowledge is a joint product of mathematically shaped problems in nature, correction by experience, the wideness of the realm of mathematical facts, and the relationship between use and meaning, "correction by experience" doesn't just mean what happens when hypotheses consciously proposed by people who already count as having all the right mathematical concepts get refuted. Rather, "correction by experience" includes what happens when you are inclined to reason some way, you get to an unexpected conclusion, and then subsequently become disposed to draw slightly different inferences/feel less confident when engaging in some of the processes that led you there. You might or might not count as revising some hypothesis, phrased in terms of fully coherent concepts, when you do this.

p.p.s. The idea that experience helps us form coherent mathematical concepts (while not figuring in the justification of our beliefs) is also a central theme in Carrie Jenkins' 2009 Grounding Concepts: An Empirical Basis for Arithmetical Knowledge.

Learning from experience need not take the form of someone explicitly forming a hypothesis, and then letting experience falsify it/doing induction to conclude the hypothesis is true.

If this were all that experience could do, it would be hopeless to appeal to it to help explain how we could get mathematical knowledge. For, plausibly, you only count as having the concept of number, once you are willing to make certain kinds of applications of facts about the numbers, reason about the numbers largely correctly etc. So, by the time that experience could falsify hypotheses containing the mature concept of number, you would already have to have lots of mathematical knowledge.

Instead, experience helps us correct and hone our mathematical reasoning all through the process of "developing a concept". How can this be?

Well, first, think about the way students are normally introduced to the concept of set. No one makes a hypothesis that there are sets, nor do math profs attempt to define sets in other terms. Rather, the professor just demonstrates various ways of reasoning about sets, ways of using claims about sets to solve other mathematical problems, etc., and gets the students to practice. Given this, the students' usage and intuitions conform more and more to standard claims about the sets, and eventually they count as having the concept of set.

I propose (and I think Carey would agree) that the original development of many concepts in mathematics works similarly, only with trial and experience playing the role of the teacher.

You start out not having the concept, and try various usages. Here, however, rather than having a professor to imitate, you just have your general creativity/trial and error/analogical reasoning to suggest ways of reasoning about "the Xs", and then an ability to check whatever kinds of consequences and applications you expect at a given time. Often this kind of creative trying and analogical reasoning will turn out to fail in some way, such as leading to contradiction, or underspecifying something important. But then you can correct it. Inconsistent reasoning about limits in the 19th century and sets in the early 20th would be examples of the former. And the kind of refinement of the notion of polygon in Lakatos's Proofs and Refutations would be an example of the latter.

We try out various patterns of reasoning about the world (e.g. calling certain things Xs, trying to apply the analogue of good reasoning about one domain to another) - with perhaps a nudge from brain structures subject to evolution affecting which patterns we are likely to try - and experience corrects these inference patterns until they cohere enough that we count as genuinely having some new concept. And note that no conscious scientific reasoning need be assumed to start this process; all we need is some disposition to go from seeing things to making noises to doing things, together with a playful/random/creative inclination to try extending those dispositions in various ways!

p.s. I haven't emphasized this point in the past, because I think questions like 'when exactly does someone start having the concept of X?' don't generally cut psychology or metaphysics at their joints. I mean: when exactly did people start having the modern conception of the atom? The interesting facts are surely facts about when people started accepting this or that idea about "atoms", or reasoning about "atoms" in this or that way. Deciding exactly what amount of agreement with us is necessary for people to count as having the same concept is a matter of arbitrary boundary setting.

But I realize now that ignoring the whole issue of concepts can be confusing. So let me just say:

When I say mathematical knowledge is a joint product of mathematically shaped problems in nature, correction by experience, the wideness of the realm of mathematical facts, and the relationship between use and meaning, "correction by experience" doesn't just mean what happens when hypotheses consciously proposed by people who already count as having all the right mathematical concepts get refuted. Rather, "correction by experience" includes what happens when you are inclined to reason some way, you reach an unexpected conclusion, and then subsequently become disposed to draw slightly different inferences, or feel less confident, when engaging in some of the processes that led you there. You might or might not count as revising some hypothesis, phrased in terms of fully coherent concepts, when you do this.

p.p.s. The idea that experience helps us form coherent mathematical concepts (while not figuring in the justification of our beliefs) is also a central theme in Carrie Jenkins' Grounding Concepts: An Empirical Basis for Arithmetical Knowledge (2008).

Labels:
philosophy of language,
philosophy of math,
thesis

## Friday, October 16, 2009

### Empirical adequacy and truth in mathematics

The current weakest link in my thesis is this (IMO): how to connect merely having beliefs about mathematics that help us solve problems and yield correct applications to concrete situations with having beliefs about mathematics that are reasonably reliable.

Couldn't totally false mathematical theories nonetheless be perfectly correct with regard to their concrete applications?

Also, even if our beliefs would indeed perfectly accurately describe some abstract objects, how can we count as referring to these objects, given that we have no causal contact with them?

My current best answer is this:

Think of human mathematicians as observing certain regularities (e.g. whenever there are 2 male rhymes and 2 female rhymes in a poem there are at least 4 rhymes all together), and then positing mathematical objects "the numbers" whose relationship to one another is supposed to echo these logical facts.

(This is a reasonable comparison because what we actually do is like this, in that we happily make inferences from a proof that "a+b=c" to the expectation that when there are a male rhymes and b female rhymes there are c rhymes all together. We behave as though we know there's this relationship between the numbers and logical facts, so it's not too much of a stretch to compare us to people who actually consciously posit that there is some collection of abstract objects whose features echo the relevant logical facts in this way.)
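The logical regularity being "echoed" here can be spot-checked mechanically. The following is just an illustrative sketch (the function name `rhyme_count` is mine, not anything from the post): for disjoint collections of a "male rhymes" and b "female rhymes", the total count always matches a + b.

```python
# Illustrative sketch: the logical fact that arithmetic echoes.
# Whenever a poem has a male rhymes and b female rhymes (and no rhyme is both),
# it has a + b rhymes all together.
from itertools import product

def rhyme_count(male, female):
    """Count the rhymes in a poem, given disjoint sets of male and female rhymes."""
    assert male.isdisjoint(female)
    return len(male | female)

# Check the pattern for a range of small cases: the count always equals a + b.
for a, b in product(range(6), repeat=2):
    male = {f"m{i}" for i in range(a)}
    female = {f"f{i}" for i in range(b)}
    assert rhyme_count(male, female) == a + b
```

Of course, no finite check is the logical fact itself; the point is just that this is the sort of concrete regularity against which posited claims about "the numbers" get tested.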

Now either there are abstract objects or not.

If there aren't abstracta (as the fictionalist thinks), the fact that mathematicians only care about structures makes it plausible to think of them as talking about the fiction in which there are such objects.

Thus, our abstract-object positing mathematicians will count as speaking about the fiction in which there are objects whose features echo the logical facts about addition in the intended way. They will also count as knowing lots of things about what's true in this fiction.

Also, note that insofar as these mathematicians propose new things that "intuitively must be true of the numbers" their intuitions will be disciplined and corrected by the fact that the relevant applications are expected, so there's a systematic force which will keep some degree of match between their claims about this fiction and what's actually true in this fiction.

If there are abstracta, then there are abstract objects with many different structures, in particular structures corresponding to every consistent first order theory (note this is true even if the only mathematical objects there are are sets! the completeness theorem guarantees that there are models of every such theory within the hierarchy of sets). So there will be some collection of objects whose features match those expected by our positers (note that the positers only really care about structural features of "the numbers", not whether they are fundamental mathematical objects etc.).
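For what it's worth, the completeness point just invoked can be stated precisely (standard textbook form, sketched here from memory):

```latex
% Gödel's completeness theorem: a first order theory has a model iff it is consistent,
\mathrm{Con}(T) \iff \exists \mathcal{M}\ (\mathcal{M} \models T)
% and by Löwenheim–Skolem a consistent theory has a countable model, which can be
% coded as a set appearing low in the cumulative hierarchy.
```

So if the sets exist at all, every consistent first order theory is already realized by some set-sized structure.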

Now, how can our positers count as referring to some such objects? Well, as noted above, we have systematic mechanisms of belief revision which kick back and ensure that their claims about the numbers must match with logical facts, and hence with the real facts about these collections of suitable abstracta. Just as looking at llamas helps ensure that certain kinds of false beliefs about llamas which you might form would be corrected, applying arithmetic ensures that certain kinds of false general beliefs you might form about the numbers would be corrected (those which lead to false consequences about sums).

Thus, we have a situation where people not only have many beliefs that are true of the numbers, and the tendency to make many truth-preserving inferences, but also where these beliefs have a certain amount of modal stability (many kinds of false beliefs would tend to be corrected). Even Fodor thinks that making correct inferences with "or" is sufficient to allow "or" to make the right kind of contribution to the truth value of your sentences, so why should the same thing not apply to talk about numbers, given that we now have not only many good inferences but also this kind of mechanism of correction which improves the fit between our beliefs about the numbers and the numbers?

You might still worry that there will be so many mathematical objects which have all the features we expect the numbers to have - how can we count as referring to any one such structure, given that our use fits all of them equally well? And if we don't uniquely pick out a structure, how can our words count as referring and being meaningful? But note that to the extent that our use of the word "the numbers" is somehow ambiguous between e.g. different collections of sets, our use of the word "human bodies" would seem to be equally ambiguous between e.g. open vs. closed sets of spacetime points. So either meaningfully talking about objects is compatible with some amount of ambiguity, or the above kind of reasoning doesn't suffice to establish ambiguity.

### Beliefs, natural kinds, and causation

I think that having the belief that there's a rabbit in the yard is a matter of having some suitable combination of dispositions to action, dispositions to experience qualia, relations to the external world, etc. (roughly: those that would make an omniscient Davidsonian charitable interpreter attribute to you the belief that there's a rabbit in the yard).

But (I think) exactly which dispositions etc. are required is quite complicated, and in some respects arbitrary (e.g. verbal behavior that would equally well track the facts about rabbits and undetached rabbit parts counts as referring to rabbits).

Does this view, that 'believing that there's a rabbit in the yard' may not pick out any supremely natural combination of mental states, prevent me from saying that beliefs can cause things?

No.

The facts about what physical combinations of stuff count as a baseball are equally complicated and arbitrary. But no one would deny that baseballs can figure in causal explanations, e.g. the window broke because someone threw a baseball at it.

Just as the somewhat arbitrary fact that a regulation baseball has to have a diameter of between two and seven-eighths and three inches doesn't prevent talk of baseballs from figuring in causal claims, the somewhat arbitrary fact that it's easier to count as referring to/thinking about rabbits rather than undetached rabbit parts doesn't prevent talk of beliefs from figuring in causal claims.

### Explaining vs. justifying beliefs

Suppose I say that there's a fire in my room, and then you ask me why I believe there's a fire in my room. I could give a causal explanation for my belief (e.g. 'Some light bounced off a fire and hit my eyes, causing such-and-such brain changes in me') or I could try to justify the claim (e.g. 'I seem to see a fire, and I don't tend to hallucinate').

These are two very different things! Thus, I think it's totally wrong to assume that the (potentially infinite series of) other beliefs I might express if asked to justify my claim that there's a fire in my room somehow figured in causing the belief. If anything, these extra beliefs are probably simultaneous results of a common cause, namely the fire.

Fire

-causes->

Light hits my retina

-simultaneously-causes->

I believe there's a fire.

I believe that I seem to see a fire.

I believe that I seem to seem to see a fire.

...

This is not to deny that beliefs CAN cause beliefs, though, as in the case of conscious, Sherlock-Holmes-style chains of inference. Also, the absence of certain beliefs might be necessary for the production of other beliefs (e.g. the absence of the belief that I have taken fire-hallucination-causing drugs might be required for causal stimulation by light from a fire to cause me to form the belief that there's a fire).

## Thursday, October 15, 2009

### Conventionalism and Realism - are they incompatible?

Conventionalism and Realism are often presented as alternatives (for example, I recently heard a talk about whether Frege should be understood as a realist or a conventionalist about number). But (at least on my own best understanding of what 'conventionalism' might be) it's not at all clear that this is the case.

I'm tempted to understand realism and conventionalism as follows, in which case (I am going to argue) the two are perfectly compatible.

You are a realist about Xs iff you think there really are some Xs.

You are a conventionalist about Xs iff you think that we can reasonably address boundary disputes about just what is to count as an X, or what properties Xs are supposed to have, by imposing arbitrary conventions.

Here's an example. I think there really are living things. But I don't think the distinction between living and non-living things is such an incredibly natural kind that much would be lost by stipulating some slight re-definition of "alive" that clearly entails viruses are/aren't "alive". Hence, (by the above definition) I'm both a realist and a conventionalist about living things.

Maybe realism about Xs is compatible with conventionalism about certain facts about Xs only when the conventionalism concerns tiny boundary disputes about the extension of the concept X? But here's another example, where the extension of X will be completely different depending on what stipulation we make.

I'm a realist about human bodies, in that I think that there are indeed human bodies. But should human bodies be identified with *open* or *closed* sets of spacetime points? This issue is (just like the virus question above) one that it seems perfectly natural to settle by stipulation.

Thus, I don't buy the argument that Frege's willingness to allow some questions about what the numbers are to be determined by convention (assuming, as the speaker suggested, he was indeed so willing) shows that he's an anti-realist about number in anything like the ordinary sense of the term.

[edit: To put the point another way - you can be a realist about all the items that potentially count as numbers but think it's vague which things exactly do count as numbers.

Taking the extension of a concept to be somewhat arbitrary/conventional doesn't require thinking that the objects which are candidates to fall under that concept are somehow unreal]

### On Woodin on 'explaining' the consistency of large cardinal axioms

One of the major highlights of MWPM was getting to hear eminent set theorist Hugh Woodin. He gave a great talk about his program of investigating large cardinal axioms, looking for a characterization of the sets that's as informative as our understanding of the numbers etc. I didn't quite buy his case for the truth of large cardinal axioms, for reasons which I present here with some hesitation (I mean, who is more likely to be wrong about explanation in set theory - me or Woodin?)

One of Woodin's main arguments seems to be that if you don't believe in large cardinals, you can't explain the fact that various large cardinal axioms turn out to be consistent. I'm not sure whether the explanation required here is mathematical (how come there's this pattern whereby all these different consistency sentences happen to be true?) or epistemic (how come thinking about large cardinals/the possibility of non-trivially mapping the universe into itself leaving a certain initial segment fixed reliably leads us to consistent theories, if it's not the case that in so thinking we are seeing how the universe of sets actually is?). But:

-If the explanation desired is mathematical, then it seems like there might be a purely number theoretic explanation for each of the Con(ZF+{some large cardinal axiom}) statements. Why wouldn't this be explanation enough? (Indeed, I thought [?] each large cardinal axiom implied the existence of the smaller large cardinals, so giving a number theoretic explanation for some strongest axiom might simultaneously explain the others?)

-If the explanation desired is epistemic, you might think that people are reasoning about what's metaphysically/mathematically POSSIBLE - e.g. that a structure satisfying ZF and containing a large cardinal is metaphysically possible. We clearly do have mathematical/metaphysical intuitions about when a given body of claims is incoherent/couldn't possibly all be true. And claims that are logically inconsistent are paradigmatic cases of claims that couldn't all be true. What's possible has to be *at least* logically consistent.

Thus, one might explain the fact we've got whole strings of large cardinal axioms A that are consistent by saying not that mathematicians saw that the sets really were A, but that they saw that objects *could possibly be* as required by A.
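To make the "purely number theoretic" option concrete: each consistency claim is itself an arithmetic sentence, via the usual arithmetization of syntax, and the large cardinal axioms line up in consistency strength (a standard example, sketched here from memory):

```latex
% Con(T) is an arithmetic (\Pi^0_1) sentence: no number codes a T-proof of 0=1.
\mathrm{Con}(T) \;\equiv\; \neg\exists n\, \mathrm{Prf}_T(n, \ulcorner 0 = 1 \urcorner)
% And, e.g., measurables sit strictly above inaccessibles in consistency strength:
\mathrm{ZFC} + \text{``there is a measurable cardinal''} \vdash \mathrm{Con}(\mathrm{ZFC} + \text{``there is an inaccessible cardinal''})
```

So a number theoretic explanation of the strongest Con sentence in such a chain would, via these implications, carry explanations of the weaker ones along with it.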

### Is Realist Carnap Trivial?

There were two neat talks about Carnap at the MWPM conference this weekend, which got me thinking. I like Carnap, and I like realism (well, it's more that I don't understand anti-realism), so I like to try to give a realist reading to all Carnap's stuff about the principle of tolerance. But my current best realist Carnap also seems kind of trivial.

Realist Carnap:

1. You can state truths in different languages, even languages which give different definitions/meanings to the same string of letters e.g. "atom".

2. Sometimes if you disagree with someone about "Xhood" (e.g. if you disagree about the question "are viruses alive?") you can step back and use other facts that you agree on to characterize the situation (e.g. viruses reproduce themselves in such and such a way, when they are dormant they don't do so and so, if we stipulate that something is alive iff it Xs then we will get the consequence that all physical things count as being alive), and decide what kind of stipulative definition of "alive" would be most useful to use in this context. Then you just go forward, using the word "alive" in this new sense, and not worrying about whether it's the same as what either of you originally meant by "alive".

Doing this lets you go on with biology without getting bogged down in likely unresolvable questions about whether viruses are alive.

BUT sometimes no stipulative definition given to a term will be as interesting as the one you started with (e.g. if you tried to re-stipulate the meaning of mathematical terms to avoid controversy about whether "there are infinitely many twin primes").

AND sometimes you disagree with your opponents so much about math/logic, that you can't agree with them about what the consequences of a given stipulation would be.

## Friday, October 9, 2009

### kind of a joke: an ad for my solution to the access problem

After a really helpful but sad conversation with KY, I realized that I really haven't done enough to make clear to casual readers just what my thesis project (and paper on the access problem) are trying to do.

This led to me making the following little advertisement.

addendum:

I just heard that Poincare thought that we evolve and/or prune our beliefs to believe what's advantageous, not what's true. In contrast, my thesis suggests that in evolving/pruning our beliefs to believe what's advantageous, we wind up believing (mostly) the truth about some suitable aspect of objective mathematical reality - but this doesn't make our mathematical beliefs a posteriori!

## Thursday, October 1, 2009

### Skepticism and Normativity

Thinking you have figured out how to solve age old philosophical problems, very quickly, is generally a bad sign.

Nonetheless, ever since TFing Intro Epistemology last semester, I find myself feeling more and more that worries about external world/other minds/memory skepticism involve an incoherent melange of a sharp proof theoretic question and a fuzzy normative question.

The proof theoretic question is something like:

- Can you prove the external world exists, starting from premises that contain only necessary truths?

- Can you prove memory is reliable starting from premises containing only necessary truths and true statements about current experience?

[Where "prove" can be cashed out in various formal ways - e.g. first order logic, or modal logic, or intuitionistic logic - to yield different variants of the question.]

And the normative question is:

When is it epistemically OK to assume premises in a given set X, given that I cannot prove them (in logic L) from premises in set Y?

Once we've made this distinction, and noted that some premises which one might assume are true, and others false, the normative question loses much of its interest (at least for me).

Furthermore, we can point out to the skeptic who e.g. believes in the reality of past experiences but not in the external world, that his position appears exactly analogous to our own. We can challenge the skeptic to provide any kind of distinction between what's OK to assume vs. not OK to assume that looks remotely principled enough to motivate our revising our judgments on the subject.

"In what sense," we can say to the skeptic, "do you know that e.g. there are infinitely many primes, or that it's impossible to know things about the external world, such that I don't also (by those very same standards) count as knowing that I have a hand? In both cases, there are more radical skeptics whom we cannot persuade. Thus, in saying that you know, but I do not, you seem to be just stomping your foot and making the unmotivated value judgment that it's OK to assume what you assume and not OK to assume what I assume.

Why should I be more confident that you have correct moral beliefs about what it's OK to assume, than that I have correct descriptive beliefs about whether I have a hand?"


### Stipulation and Easy Mathematical Knowledge

As noted before, I think we get (mature, human) mathematical knowledge by benefiting from causal interactions with the world that lead us to find "coherent" combinations of mathematical statements obvious, and that our acceptance of these coherent stipulations helps determine the meaning of our words in such a way that these stipulations express truths in our language.

But this suggests a question. (Or at least, related views suggested a question to Shapiro and Ebert.) Suppose someone accepts ZF and just guesses some elaborate provable truth T, and then stipulates {ZF+T}. Do they count as knowing that T? Doesn't my view commit me to thinking that they do?

The combination of ZF+T is indeed coherent, so I think that people who naturally found T just as obvious as people with mainstream mathematical intuitions find ZF would count as expressing mathematical truths, and indeed knowing that T. (See my paper the Doctoroids for more on this, though I wrote it before seeing the Shapiro paper.)

But what about someone who feels uncertain about whether T, but tries to just stipulate it?

In general, I think, such a person won't count as having knowledge, because they are taking what is (relative to their current state of knowledge) an excessive epistemic risk - and hence they lack justification for their true beliefs. If their current mathematical faculties and other experiences do not give sufficient reason to think that adding T to their beliefs would lead to a logically consistent system, they also lack sufficient reason to think that adding T would lead to a system of axioms that correctly describes some realm of mathematical reality. Thus, they are being epistemically irresponsible in adding this axiom.

However, if their current mathematical and other reasoning does suggest (though not prove) that adding T would be consistent, they can be justified in adding T as an axiom (although they may not be justified in assuming that once they have e.g. stipulated the axiom of choice to be true, they are talking about the same mathematical structure as they originally were).


### bookclub: Wright and Hale's comeback

Last time, I blogged about McFarlane's criticisms of Wright and Hale - here's what I think of their response, published in the same journal.

W+H say that the difference between stipulating the axioms of PA vs. stipulating Hume's principle is that the former stipulation fails their requirements by being "arrogant". So, what exactly is supposed to be arrogant about this stipulation:

A(x) x=x iff there is an object 0, and .... (the axioms of PA)

that doesn't also apply to all the instances of the schema below?

the Fs are equinumerous with the Gs iff the number of Fs = the number of Gs
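(As a side note, the finite case of this schema can be checked mechanically. The Python sketch below is my own toy illustration, not anything from W+H: it defines equinumerosity by pairing elements off one-to-one, without ever counting, and then confirms that this coincides with sameness of cardinality.)

```python
# Toy illustration of Hume's principle in the finite case (my own sketch):
# the Fs are equinumerous with the Gs iff the number of Fs = the number of Gs.

def equinumerous(fs, gs):
    """Pair off elements of fs and gs one-to-one, without counting either side."""
    fs, gs = list(fs), list(gs)
    while fs and gs:
        fs.pop()  # discard one element from each side
        gs.pop()
    return not fs and not gs  # equinumerous iff both sides run out together

# Hume's principle, finite instance: equinumerosity matches equal cardinality.
for fs, gs in [({'a', 'b'}, {1, 2}), ({'a'}, {1, 2, 3})]:
    assert equinumerous(fs, gs) == (len(fs) == len(gs))
```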

I confess that I'm not sure exactly what W+H's answer to this is, even after multiple readings of the paper. But here are some things I see in the paper that don't seem to work as explanations for why Hume's principle escapes arrogance:

1. Specification of truth conditions for all atomic sentences:

One criterion that comes up is that the latter collection of stipulations gives truth conditions to all atomic sentences in the language of numbers, in terms which someone who doesn't yet understand number talk can understand.

But then I don't see why the advocate of just stipulating the PA axioms couldn't break their stipulation down into a similar infinite series of stipulations, each of which equates an atomic sentence with a logical truth, as below, and so on for all the countably many atomic sentences derivable from PA, e.g.

A(x) x=x iff 0 is a number

A(x) x=x iff 1 is a successor of 0

..

You might worry that we can't intend to make any such stipulation, but note that both series of axioms will be recursively axiomatizable (we aren't enumerating all the truths of arithmetic, just all the ATOMIC truths). So it's hard to see how we could be capable of intending all the instances of W+H's schema, but not the PA schema.
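(To make the recursiveness point concrete: the following toy Python sketch, my own construction rather than anything in W+H, mechanically lists true atomic addition and multiplication sentences by dovetailing through pairs of naturals. The point is just that the atomic truths can be effectively enumerated; nothing here requires enumerating all truths of arithmetic.)

```python
# Toy sketch (my own, not W+H's): the true ATOMIC sentences of arithmetic
# form a recursively enumerable set - a machine can list them one by one.
from itertools import count, islice

def atomic_truths():
    """Yield true atomic sentences of arithmetic, dovetailing over pairs (m, k)."""
    for n in count():                # enumerate pairs by their sum n = m + k
        for m in range(n + 1):
            k = n - m
            yield f"{m} + {k} = {m + k}"   # a true addition equation
            yield f"{m} * {k} = {m * k}"   # a true multiplication equation

# Take the first few entries of the enumeration.
first = list(islice(atomic_truths(), 4))
```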

2. RHS can generate understanding of the new terms

Somehow W+H think that saying 'Let 'the numbers' name some canonical collection of objects which relate to each other in such a way that there's a "0 object", it stands in the successor relation to other objects, etc.' would not suffice to let someone who did not previously have the concept of number understand it, whereas Hume's principle does.

But I don't buy the argument for this. All W+H say is that a) something else, namely the ramsification of the PA stipulation, would have to be couched in second order logic and hence presuppose something like understanding of the numbers and b) the PA stipulation just adds to the ramsification by giving labels to the particular objects that stand in the relations stipulated by the ramsification.

They then seem to conclude that making the stipulation of the axioms of PA cannot suffice for understanding. This would be a plausible argument if they first showed that the ramsification doesn't suffice to express the concept of number, and then argued that the straight version doesn't add anything. But what they actually argued was that the ramsification would be incomprehensible to someone who didn't already have something more powerful than the concept of number. So, the fact that the straight stipulation of the PA axioms doesn't *add anything* to the ramsification, doesn't suffice to show that it couldn't be used to give someone understanding of the concept of number.

(The ramsification basically asserts, using the language of second order logic, that there are things satisfying the PA axioms.)

W+H's argument seems to be exactly analogous to saying:

The stipulation 'a bachelor is an unmarried man' doesn't suffice to introduce someone to the concept of bachelorhood, because it doesn't add anything to 'a bachelor is an unmarried man who is either a happy bachelor or a sad bachelor' and that statement cannot be used to introduce someone to the concept of bachelorhood.

And this is surely not a good argument.

p.s. I don't think the notion of 'what it takes to give someone a concept who doesn't previously have it' is well defined - like, psychologically, people can know related concepts, be susceptible to conditioning in various ways etc. Just giving some people one example of gouchery would suffice to make them understand the word, whereas you could say any number of things to a rock, and this wouldn't teach the rock the concept.

