Maybe I'm missing something here...
Quine suggests that we adopt first order logic as the language for science. But first order logic can't capture the notion of 'finitely many Fs': it can only express, for each particular n, the claim that there are n Fs. Yet we do understand the notion of finiteness, and use it in reasoning (e.g. if there are finitely many people at Alice's party, there is one person such that no one is taller than him) and potentially in science. Hence, we should not adopt first order logic as the language for science.
[The standard way to try to get around this is by talking about relations to abstract objects like the numbers (there are finitely many Fs if there's a 1-1 correspondence between the things that are F and some set-theoretic surrogate for a particular number). This would give you the right extension if your scientific hypothesis could say that something had the structure of the numbers. But first order logic can only state axioms, like PA, which don't completely pin down the structure of the numbers: any first order axioms which you use to characterize the numbers will have non-standard models. This is Putnam's point in his celebrated model-theoretic argument against realism. So, if you take this strategy, rather than saying that there are finitely many people at Alice's party, you can only say that the people at the party are equinumerous with some items satisfying a certain collection of first order axioms. And this does not rule out non-standard models.]
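The inexpressibility claim behind all this can be made precise via the compactness theorem; here is a standard sketch (the symbol choices are mine, for illustration):

```latex
% Suppose, for contradiction, that some first order sentence \varphi
% said "there are finitely many Fs". For each n, let \lambda_n be the
% first order sentence "there are at least n Fs":
\lambda_n \;:=\; \exists x_1 \cdots \exists x_n
  \Big( \bigwedge_{i<j} x_i \neq x_j \;\wedge\; \bigwedge_{i} F x_i \Big)
% Every finite subset of \{\varphi\} \cup \{\lambda_n : n \in \mathbb{N}\}
% is satisfiable (take a model with enough Fs). So by compactness the
% whole set has a model: a model in which \varphi holds and yet there
% are infinitely many Fs. Contradiction.
```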
Monday, June 28, 2010
Is Math Logic?
Is mathematics just a branch of logic? This is the first question many people ask about philosophy of math (sometimes with a vague idea that a) it would solve some kind of metaphysical or epistemological problems if math were logic or b) it's been proved that math isn't logic). Well, unsurprisingly, the answer depends on what you mean by 'logic'. Here are some different senses of the word 'logic' that one might have in mind.
1. first order logic
2. fully general principles of good reasoning
3. a collection of fully general principles which a person could in principle learn all of, and apply
4. principles of good reasoning that aren't ontologically committal
5. principles of good reasoning that no sane person could doubt
The sense in which it has been proved that math isn't logic is (to put things as briefly as possible) this: You can't program a computer to spit out all and only the truths of number theory.
This fact directly tells us that the mathematical truths are not all logical truths, if we understand "logic" in sense #1 - since we *can* program a computer to list off all the logical truths (the valid sentences) of first order logic. And it also tells us that the mathematical truths aren't all logical truths in sense #3 or #5 either - if we are willing to make the plausible assumption that human reasoning can be well modeled in this respect by some computer program. For if all human reasoning can be captured by a program, then so can all human reasoning from some starting finite collection of humanly applicable principles, and so can the portion of human reasoning that no sane person could doubt (to the extent that this is well defined).
However, if by "logic" you just mean #2 - fully general principles of reasoning that would be generally valid (whether or not one could pack all of these principles into some finite human brain) - then we have no reason to think that math isn't logic. We expect the kinds of logical and inductive reasoning we use in number theory (e.g. mathematical induction) to work for other things (especially for things like time, which we take to have the same structure as the numbers). If Jim didn't have a bike on day 1, and if, for each subsequent day, he could only get a bike if he had already had a bike on the previous day, then Jim never gets a bike. If there are finitely many people at Jane's party, there is one person such that no one is taller than them. The laws of addition are the same whether you are counting gingerbread men and lemon bars, or primes and composite numbers. And this doesn't just apply to principles of mathematical reasoning which we actually accept. We also expect any *unknown* truths about the numbers (as the smallest collection containing 0 and closed under a transitive, antisymmetric relation like successor) to be mirrored by corresponding truths about any other collection of objects which contains some other starter element and is as small as possible while being closed under a transitive, antisymmetric relation (be this a collection of infinitely many rocks, or a collection of some other abstracta like the range of possible strings containing only the letter "A"). Hence, it is plausible that every sentence about numbers is an instance of a generally valid sentence form containing only words like "smallest", "collection", "antisymmetric", "finite", etc. - and every mathematical truth is a logical truth in this regard.
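The bike example is just mathematical induction in disguise; a minimal Lean sketch of that inference (the predicate name `hasBike` and the day-0 indexing are choices made here for illustration):

```lean
-- If Jim has no bike on day 0, and having a bike on any later day
-- requires having had one the day before, then Jim never has a bike.
theorem no_bike (hasBike : Nat → Prop)
    (base : ¬ hasBike 0)
    (step : ∀ n, hasBike (n + 1) → hasBike n) :
    ∀ n, ¬ hasBike n := by
  intro n
  induction n with
  | zero => exact base
  | succ k ih => intro h; exact ih (step k h)
```

The point of the post is that exactly this pattern of reasoning is expected to hold for days, rocks, or strings of "A"s, not just for numbers.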
Finally, if by "logic" you mean #4 - principles of good reasoning that aren't ontologically *committal* - the answer depends on a deep question in meta-ontology. For it is well known that standard mathematics can be reduced to set theory, which in turn can be reduced to second order logic. But what are the ontological commitments of second order logic?
People have very different intuitions about whether we should say that there really are objects (call them sets with ur-elements, or classes) corresponding to "EX" statements in second order logic. Does the claim that "Some of the people Jane invited to her party admire only each other, so if all and only these people accept, she will have a very smug party" assert the existence of objects called collections? More generally: is the quantification over classes in second order logic ontologically committal? Statements like the one above certainly seem to be meaningful. And it turns out not to be possible to paraphrase away the mention of something like a set or class in the sentence above using only the tools of standard first order logic. This reveals a sense in which, in our logical reasoning, we treat abstracta like classes (or, equivalently for these purposes, sets with ur-elements) very similarly to ordinary objects. But is this enough to show that second order logic is ontologically committal (and hence not logic at all, according to meaning #4)?
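For concreteness, the embedded party claim has a well-known second order rendering (essentially the Geach-Kaplan sentence; the predicate letters I and A are chosen here for illustration):

```latex
% With Ix for "Jane invited x" and Axy for "x admires y", the claim
% "some of the people Jane invited admire only each other" becomes:
\exists X \Big( \exists x\, Xx \;\wedge\; \forall x \, (Xx \to Ix)
  \;\wedge\; \forall x \forall y \big( (Xx \wedge Axy)
  \to (x \neq y \wedge Xy) \big) \Big)
% The initial \exists X is the second order quantifier whose ontological
% commitments are at issue; such sentences are known to have no first
% order paraphrase using I and A alone.
```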
I propose that the key issue here concerns how closely ontology is tied to inferential role. Both advocates and deniers of abstract objects will agree that many of the same syntactic patterns of inference that are good for sentences containing "donkey" are also good for sentences containing "set". But what exactly does this tell us about ontology? If you think ontological questions just are questions about the logical role of an expression in a given language, this tells you something very decisive. On the other hand, if you think ontology can swing somewhat free of the inferential roles of sentences in languages (so an expression can have an object-like inferential role without naming an object), it's open to you in principle to say that - however similar their logical role - second order quantifiers are not ontologically committal. On this view, claims about sets with ur-elements are just ways to make very sophisticated claims (generally claims that could not otherwise be finitely expressed) "about" the behavior of and relationships between ur-elements, and true claims about pure sets (i.e. sets that can be built up just from the empty set) are true in a way that does not involve any particular relationship to any objects, but can illuminate the necessary relationships between different expressions about classes that do have ur-elements. [At the moment I prefer the former view, that quantification in second order logic is ontologically committal, but this is a subtle issue.]
Thus, to summarize, it is fully possible to say - even after Gödel - that math is the study of "logic" in the sense of generally valid patterns of reasoning. However, if you say this, you must then admit that "logic" is not finitely axiomatizable, and that there are logical truths which are not provable from the obvious via obvious steps (indeed, plausibly ones which we can never know about). Note that to make this claim one need not give up on the idea that logical arguments proceed from the obvious via obvious steps. For, if you take this route, you can (and probably will want to) distinguish the human practice of giving logical arguments from the collection of logical truths. You can say: only some of the logical truths seem obvious to us, and only some of the logically-truth-preserving inferences seem obviously compelling to us. We make logical arguments by putting these inferences together to get new results which are also logical truths. But what Incompleteness shows is that not all logical truths can be gotten from the ones that we know about. You can even claim that mathematical truths are logical in the further sense of not being ontologically committal, if you allow (contrary to the usual close association between objecthood and logical role) that the set quantifiers in second order logic are not ontologically committal.
Friday, June 18, 2010
Knowledge and Canonical Mechanisms
In my first epistemology class in college, the prof encouraged us to look for adequate necessary and sufficient conditions for knowledge by making the following (imo appealing) argument. We expect that there's SOME nice relationship between facts about knowledge and descriptive facts not containing the word knowledge, since our brains seem to be able to go, somehow, from descriptions of a scenario (like the Gettier cases) to claims about whether the person in that scenario has knowledge. However, philosophical attempts to find a nice definition of knowledge in other terms seem to have systematically failed. This suggests that there may be a correct and informative definition of knowledge to be found: one that is too long to be an elegant philosophical hypothesis, but not too long to correspond to what the brain actually does when judging these claims.
So here's what I propose that the true definition of knowledge might look like:
We describe messy physical processes by talking about simple mechanisms, and a notion of what these mechanisms tend to do "ceteris paribus". People agree surprisingly much on which mechanisms approximate what (e.g. how to go from facts about swans to claims about the swan lifestyle, or how to divide up actual dispositions to behavior into "behaving normally" vs. "something special happening whereby the ceteris aren't paribus"). One thing that can be so approximated is human belief formation. We think about actual human belief formation by saying that it "ceteris paribus" approximates a combination of various belief forming mechanisms (e.g. logical deduction, looking, etc.). A reliable belief forming mechanism is one whose ceteris paribus behavior yields true beliefs.
Certain belief forming mechanisms are popular, and remain popular with people even when they undergo lots of reflection. Some of these are canonical, in the sense that we count them as potential conduits for knowledge. But if we ever come to believe that some such mechanism is not reliable (in the sense defined above), we will stop saying that beliefs formed via it count as knowledge. So here's what I think a correct definition of knowledge might look like.
We have, say, 300 canonical reliable mechanisms for producing knowledge, 200 canonical reliable mechanisms for raising doubt (100 optional and 100 obligatory), and 200 canonical reliable mechanisms for assuaging doubt. Call these CRMs. Our definition starts by giving a finite list of all these CRMs.
You know P if and only if your belief in P was generated by some combination of CRMs for producing knowledge, and you went through CRMs for assuaging doubt corresponding to a) all optional CRMs for doubt raising that you did engage in, and b) all obligatory CRMs for doubt raising that apply to your situation.
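The shape of this biconditional can be displayed in a toy model, with tiny invented CRM lists standing in for the real 300/200/200 (every name below is hypothetical, purely to show the structure of the definition):

```python
# Toy model of the proposed definition schema. The CRM lists are
# invented stand-ins; only the shape of the biconditional matters.
PRODUCING = {"perception", "deduction", "testimony"}   # knowledge-producing CRMs
OPTIONAL_DOUBT = {"double_check"}                      # optional doubt-raising CRMs
OBLIGATORY_DOUBT = {"conflicting_report"}              # obligatory doubt-raising CRMs
ASSUAGING = {                                          # doubt-raiser -> its assuager
    "double_check": "re_examine",
    "conflicting_report": "resolve_conflict",
}

def knows(produced_by, doubts_engaged, doubts_applicable, assuagers_used):
    """True iff the belief fits the schema: produced by knowledge-producing
    CRMs, and every doubt that was raised (optional ones actually engaged in,
    obligatory ones that apply to the situation) was assuaged."""
    if not produced_by <= PRODUCING:
        return False
    pending = (doubts_engaged & OPTIONAL_DOUBT) | (doubts_applicable & OBLIGATORY_DOUBT)
    return all(ASSUAGING[d] in assuagers_used for d in pending)
```

For instance, a belief formed by perception with no doubts raised counts as knowledge, while one where an optional doubt was raised but never assuaged does not.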
Even though this is just a claim about what the form of a correct definition of knowledge would look like, it already has some reasonably testable consequences:
1. That situations where it seems unclear or vague what mechanism best describes a person's behavior (should I think of the student as correctly applying this specific valid inference rule, or fallaciously applying a more general but invalid inference rule?) will also make us feel that it's unclear or vague whether the person in question has knowledge.
2. That we should feel unclear about whether to attribute knowledge when reliable but science-fictiony, and hence non-canonized, mechanisms are described. For example, most people would say it's OK to take deliverances of the normal 5 senses at face value, without checking them against something else. But what about creatures with a 6th sense that allowed them to reliably read minds, or to form true beliefs about arbitrary Pi-0-1 statements of arithmetic (imagine creatures living in a world with weird physics that allows supertasks, and suppose that they have some gland that has no effect on conscious experience, but whose deliverances reliably check each case)? Would they count as knowing if they form beliefs by using these?