Sunday, April 1, 2018

The Access Problem for Holes

[Hello to anyone out there!

Project: exposure therapy for my fear of posting sloppy/confusing stuff to this blog continues. So uh... sorry. I hope to return to regularly posting polished content soon.]

One of the least popular aspects of my philosophy of mathematics is what I call Moderate Quantifier Variance. So I thought I'd explain one reason why I'm such a big fan of this view (and why I think you should be too) which has nothing to do with philosophy of mathematics.

The Access Problem for Holes

Consider our knowledge of holes. As discussed in D and S Lewis's `Holes', it's appealing to take holes to be distinct from things like the air inside them or portions of the matter that `hosts' them (e.g., portions of cheese of some diameter around a hole in Swiss cheese).

We seem to know how deeply a piece of cheese must be indented in order for there to count as being a hole in that piece of cheese. But (channeling a crazy philosopher for a second) this knowledge can seem rather mysterious. For our senses tell us how the cheese is distributed through space. But how do we know where to draw the line regarding how shallow a hole can be? No sensory experience seems to point out a single metaphysically special place to draw the line. So how can one explain the match between human beliefs about how shallow a hole can be and objective matters of fact?

It's appealing to say: there's no mystery about human accuracy here, because if we had used `hole' differently (by taking the minimum hole angle to be larger or smaller) then the meanings of our words would have been different, so these alternative hole-identifying practices would have yielded true utterances. But variant practices of `hole' individuation can require changes in the truth values of sentences which don't even use the word `hole' (e.g., a purely logical sentence like the Fregean paraphrase of `there are >3000 things' can go from true to false). So it seems plausible that allowing such changes in meaning requires allowing variant meanings for logical vocabulary like ``there is'' as well as in the meaning of ``hole''.
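For concreteness, here is the standard Fregean-style paraphrase of `there are at least 3 things' in purely logical vocabulary (the >3000 case just continues the same pattern with more variables):

```latex
\exists x \, \exists y \, \exists z \, (x \neq y \wedge x \neq z \wedge y \neq z)
```

Counting each hole as an extra object can flip the truth value of such a sentence even though it contains no occurrence of `hole', which is why variant hole-individuation practices seem to force variant quantifier meanings.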

So I think a great way to solve this `access problem for holes' is to accept

Moderate Quantifier Variance: there are multiple existential-quantifier-like meanings which the words `there is' can take on in different idiolects (say English_1910, English_2010, etc.), though within any particular idiolect `there is' is univocal.

  • These meanings are `quantifier like' in the sense that they obey all instances of the standard first order logical inference rules/axioms for `exists' (within the language they belong to). 
  • If there is a single maximally natural quantifier sense (as, e.g., Sider thinks there is, corresponding to the sense our talk of `what there is' takes on when doing ontology), these variant quantifier senses need not be mere quantifier restrictions of this fundamental sense. 
I like to use Moderate Quantifier Variance to explain/vindicate the freedom mathematicians take themselves to have to introduce new logically coherent structures for study. The idea is that when mathematicians consistently extend the axioms of pure mathematics by adding axioms describing some new mathematical structure, like the complex numbers, their stipulations can both give meaning to expressions like `complex number' and change the meaning of `there is', so that these axioms express truths.

But whatever you say about math, I think moderate quantifier variance is already useful for understanding our knowledge of Swiss cheese!

Tuesday, October 31, 2017

How Not to Title your Dissertation

So I just had to look up my official dissertation title on my grad school's website and....well, here's a little piece of advice I would give to my past self. (The 'do' example comes from the excellent Johann Frick)

"'Making People Happy, Not Making Happy People': A Defense of the Asymmetry Intuition in Population Ethics"  <-- conveys the core of the view being advocated, what's controversial about it, and why it's appealing, in a punchy and fairly concise way


"The Marriage of Rationalism and Empiricism: A New Approach to the Access Problem." <-- draws attention to question of whether dissertation does anything new in a defensive "protesting too much" way that invites a negative answer, compares self to William Blake and/or God.

Sunday, October 29, 2017

Access to reference magnets: a bitter pill you’ve already swallowed?

[This post proposes a defense of a famous defense of naive scientific realism against a famous antirealist challenge.  So, sorry that I'll have to speed through certain classics to get to the action in a timely fashion (and I'll try to add explanatory links later). 

Also, Ted Sider may have scooped me re: this proposal in Writing the Book of the World section 3.2 (I go back and forth about how to interpret him). But, regardless of priority, I'm shouting about it on the internet now because I'd like to see more uptake.]

Hilary Putnam raised a model-theoretic challenge to the commonplace realist idea that `even in the ideal limit of scientific investigation' certain aspects of our best theory of the world could be wrong. David Lewis responded by invoking "reference magnets" (i.e., intrinsically eligible concepts/joints in nature). The idea is that some concepts are more intrinsically eligible candidates for the meaning of words than others. So, it can be correct to interpret someone as meaning plus rather than Kripke's quus, or electron rather than electron-that-an-ideal-observer-starting-on-earth-could-discover, even if this involves attributing to them slightly more false beliefs.

Swinging back on behalf of Putnam, Tim Button and Jared Warren press a kind of access problem for fans of reference magnets. Suppose that there really are intrinsically eligible joints in nature, as Lewis argues. How could creatures like us have come to recognize where these joints are, well enough to know when there is a single reference-magnetic joint for our use of some word to defer to?

I think this challenge is worth taking seriously and may point out a bitter pill* which the realist/friend of reference magnetism must swallow. But I want to suggest that this bitter pill may already be part of the larger bitter pill nearly everyone has already swallowed in taking our intuitions about how to do scientific induction at face value. That is, accepting reference magnets doesn't land us with any more of an access problem than we already face in rejecting Humean skepticism about induction.

To see what I mean, consider Nelson Goodman's picture of scientific induction. When we do scientific induction, we don't treat all concepts equally. We currently take some predicates (and relations and functions etc.) to be more projectable than others, e.g., green vs. grue. And (Goodman notes) we do a kind of induction about how to do induction, letting experience and reflection change our beliefs about which predicates are projectable. So (in effect) we dogmatically presume both that certain predicates are more projectable than others, and that certain ways of letting experience change our beliefs about which predicates are projectable are reliable. And plausibly, taking scientific induction at face value requires doing something like this.

But maybe the friend of reference magnets can say that access to these facts about what's joint carving (in the sense of being specially friendly to induction) is all they need for access to reference magnets. If the reference-magnet-fan's doctrine identifies being an intrinsically eligible concept in the sense of reference magnetism with being an intrinsically eligible concept for the purposes of scientific induction (as, e.g., Sider does and I essentially want to**), then it seems that accepting this doctrine doesn't create any extra intuitive access worries.

*[i.e., Perhaps they must embrace a rather depressing picture of the human condition, on which `justified’ reasoning (to the extent that we have any such thing) involves going along dogmatically assuming that certain methods for detecting intrinsically eligible concepts/reference magnets are tolerably accurate (and then being lucky enough to be right about this)]

**[I think one tiny refinement answering Hawthorne's problem about Europe and the Ural Mountains discussed on pg 39 of WTBOTW is needed, but that this makes no difference to the access problem stuff above. More on this in a later post]

Saturday, October 28, 2017

Trivializing Benacerraf and Monstrous Moonshine

[note: Sorry that this post is a little wordy. I'm trying to get back into blogging, and forcing myself to post stuff I'm not quite happy with is part of that. Also see this paper for a way more detailed and less aggro take on this issue.]

Remember that the access problem (aka the Benacerraf problem) for realists goes something like this: If moral/mathematical/etc realism is true, how can human accuracy and reliability about moral/mathematical facts be anything but a miracle or a mystery?

People like Justin Clarke-Doane and perhaps David Enoch (call them The Trivializers) have been suggesting that we can answer all legitimate access worries besetting realists about necessary domains like mathematics, morals, etc. just by “stapling together” two things to explain why we couldn't have easily been wrong (and providing a similarly trivial explanation for the sensitivity of our beliefs, which I won't discuss here):
  • an explanation (e.g., historical or evolutionary) for why we reliably believe certain moral/mathematical/etc. claims,
  • the fact that these claims are necessary truths
So, for example, a classic platonist might try to answer access worries by explaining human reliability about mathematics like so:

TRIV: Mathematicians reliably believe truths because they reliably believe only those mathematical claims which can be proved in a certain formal system (e.g. ZFC) and this formal system is (necessarily) truth preserving w.r.t. the platonic mathematical objects.

Such an explanation is not likely to satisfy anyone who feels an intuitive access worry. For TRIV just explains one intuitively mysterious match between human psychology and objective facts which intuitively “cries out for explanation” (our acceptance of theorems that match what's going on in Plato's heaven) by positing another such mysterious match (our acceptance of axioms that match what's going on in Plato's heaven).

But it does (in some sense) suffice to explain the safety of human beliefs by deriving our reliability about realist facts concerning the relevant domain (i.e., that in all close possible worlds, our beliefs about these domains match up with the truth) from more general premises which the realist (if not their deflationary opponents) accepts.

And I take Trivializers to be suggesting that our intuition that certain regularities involving necessary truths "cry out for" explanation in some deeper/more unified sense (which mere deductions of reliability/safety/sensitivity like TRIV need not provide) is an illusion.

There are two reasons why I don't buy this.

First, conceptually analyzing anything (from tablehood to justice) is infamously hard, but there are plenty of good paradigms for how to think about "crying out for explanation" which seem to apply equally well to necessary and contingent regularities (e.g., norms that say we should prefer theories with fewer degrees of freedom, or Kitcher's idea of scientific explanation as unification).

Much more importantly though, embracing the Trivializers' position seems to have deeply implausible revisionary consequences for mathematical practice, since mathematicians sure seem to think that some regularities involving necessary truths can cry out for (non-trivial) explanation.

For example, consider this quote from a popularizing article about the history of John Conway's `Monstrous Moonshine' conjecture.

``Strangely enough, [the j-function]'s first important coefficient is 196,884, which McKay instantly recognized as the sum of the monster's first two special dimensions.
Most mathematicians dismissed the finding as a fluke, since there was no reason to expect the monster and the j-function to be even remotely related. However, the connection caught the attention of John Thompson, a Fields medalist now at the University of Florida in Gainesville, who made an additional discovery. The j-function's second coefficient, 21,493,760, is the sum of the first three special dimensions of the monster: 1 + 196,883 + 21,296,876. It seemed as if the j-function was somehow controlling the structure of the elusive monster group.''
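The arithmetic behind McKay's and Thompson's observations is easy to verify directly (the dimensions below are copied from the quote):

```python
# First "special dimensions" of the monster group, as quoted above.
monster_dims = [1, 196883, 21296876]

# McKay: the j-function's first important coefficient, 196,884, is the
# sum of the monster's first two special dimensions.
assert 196884 == monster_dims[0] + monster_dims[1]

# Thompson: the second coefficient, 21,493,760, is the sum of the
# first three special dimensions.
assert 21493760 == sum(monster_dims)
```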

Note that mathematicians already had separate proofs of facts about the j-function and the monster group (and hence an `explanation' of the match between these facts of the kind which TRIV provides, i.e., a deduction, from more general premises, of the fact that this relationship holds in all close possible worlds). But (once this match between apparently unrelated domains proved striking enough) they expected to find some further/deeper unifying explanation, and this expectation guided choices for further research.

So I'm prima facie pretty suspicious of the idea that felt intuitive demands for unifying/satisfying/non-trivial explanations of regularities involving necessary truths are generally misguided. But maybe I'm not being charitable to Clarke-Doane and/or he can find some way of separating the "this regularity cries out for further explanation" intuitions he wants to dismiss as unreliable from those which obviously do good work in mathematics.

Thursday, June 30, 2016

Posthumous Vindication and Newton's Concept of the Derivative

In a recent Mind paper, `Incomplete Understanding of Concepts: the Case of the Derivative', Sheldon Smith vividly sets up some classic questions about Newton's concept of the derivative, and how later mathematical work can be seen as vindicating Newton.

However, I'm not entirely convinced by Smith's answers to these questions.

Historical Background:

[Smith tells us how] Newton and Leibniz had certain limited beliefs about the derivative:
  • that it was "the local rate of change of a function given by the slope of the tangent", so the derivative of x^2 kinda should be 2x
  • that it was the limit as i goes to 0 of (f(x+i)-f(x))/i, hence the derivative of x^2 was [(x+i)^2-x^2]/i, which they thought was =(2xi+i^2)/i=2x+i=2x
but they did not have a very solid justification for the latter reasoning (particularly the presumption that one can divide by i, and then treat 2x+i as equal to 2x, in the claim above).

Since then, mathematicians have defined multiple derivative-like notions which all let one defend reasoning like the above more rigorously, but don't always agree:
  • the usual: the derivative of f at x is the number f'(x) such that for every epsilon > 0 there is a delta > 0 such that |(f(x+i)-f(x))/i - f'(x)| < epsilon whenever 0 < |i| < delta
  • the symmetric derivative: [like the usual definition but with (f(x+i)-f(x-i))/2i in place of (f(x+i)-f(x))/i] (note that when f(x) = |x|, the symmetric derivative at 0 is 0 whereas the standard derivative at 0 is undefined).
  • a definition using infinitesimals
  • a definition which also can apply to generalized functions like the Dirac delta function
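To illustrate how the first two definitions can come apart, here is a quick numerical sketch (approximating the limits with a small i rather than computing them exactly):

```python
def standard_quotient(f, x, i):
    # (f(x+i) - f(x)) / i, whose limit as i -> 0 is the usual derivative
    return (f(x + i) - f(x)) / i

def symmetric_quotient(f, x, i):
    # (f(x+i) - f(x-i)) / 2i, whose limit is the symmetric derivative
    return (f(x + i) - f(x - i)) / (2 * i)

f = abs  # f(x) = |x|

# At x = 0 the one-sided standard quotients disagree (+1 vs. -1),
# so the usual derivative of |x| at 0 does not exist...
print(standard_quotient(f, 0, 1e-6), standard_quotient(f, 0, -1e-6))

# ...but the symmetric quotient is identically 0, so the symmetric
# derivative of |x| at 0 is 0.
print(symmetric_quotient(f, 0, 1e-6))
```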
Furthermore there is a common intuition that, in providing some of the definitions above and proving things with them, mathematicians like Weierstrass "justified [Newton's and Leibniz's] thoughts" and that Newton and Leibniz would have felt "vindicated" by subsequent developments of the derivative.

The questions:

Now, Smith argues that Newton didn't seem to be using any particular one of these modern concepts of the derivative.
  • Newton didn't (somehow) implicitly have any of these precise concepts in mind, and which definition of the derivative he would have preferred to adopt (if he had been told about all of them) might vary with which one he found out about first.
  • There's no single "best sharpening" of what Newton believed/had in mind which must be accepted in the limit of ideal science. We just have separate notions of the derivative, each of which is mathematically legitimate. Thus we can't say that Newton meant, say, the standard contemporary notion of the derivative because he was conceptually deferring to the results of ideal science.
So he asks:
  1. How `` should [one] think about the derivative concepts with which Newton and Leibniz thought''? 
  2.  How ``could [Weierstrass] have managed to justify their thoughts even if their thoughts did not involve the same derivative concept as Weierstrass’s''?

Smith's Answers:
I take Smith's answers to the above questions to be as follows:

Q1: What was Newton's concept of the derivative [specifically, how does it affect the truth conditions for sentences]?

A: Newton's concept of the derivative (call it derivative_N) "only has a definite referent" in cases where all acceptable sharpening definitions of his concept agree. So, for example, if the symmetric derivative and the standard derivative were both acceptable sharpenings, then expressions like `the derivative_N of f(x)=|x|' would fail to refer [or, perhaps, would refer to a function which is undefined at 0, so that `the derivative_N of f at 0' would fail to refer].

Q2: How was Weierstrass able to vindicate Newton, given that his concept of the derivative was different from Newton's?

A: One can vindicate Newton by justifying particular claims Newton made (e.g., about the derivative of x^2). And one can do this by giving a proof of the corresponding claim employing Weierstrass's definition, if it also happens to be the case that all other permissible sharpenings of Newton's notion of the derivative would agree on this claim.

A Small Objection: 

I'm not entirely convinced by Smith's account of Newton's concept (Q1) for various reasons. But even if Smith is right about Q1, I think his answer to the vindication question (Q2) is fairly unsatisfying.

For suppose (as Smith seems to presume) Weierstrass vindicated Newton by showing the truth of particular claims he made about calculus -- that, say, what he expressed by saying ``the derivative of x^2 is 2x'' was true. If (as Smith's account of Newton's concept seems to tell us) the truth of this claim requires that all acceptable precisifications agree in making ``the derivative of x^2 is 2x'' come out true, how can one adequately justify Newton's claim merely by discovering *one* such precisification and showing that *it* makes the above sentence come out true?

A fix?

Maybe Smith could solve this problem (while keeping his account of the concept and the, IMO, good idea that vindicating Newton doesn't require assessing all possible derivative-like notions) as follows.

Say that "vindicating Newton's thought"  in the sense we normally care about (the in the sense that seems to have happened, and that, plausibly, Newton and Weierstrass would have cared about) doesn't require showing that some of Newton's specific mathematical utterances expressed truths. Instead, one can do it just  by showing Newton was right to believe some more holistic meta claim like `There is some mathematical notion which makes [insert big collection of collection of core calculous claims and inference methods] all come out true/reliable'.

Thursday, April 21, 2016

Three Projects Involving Dispensing With Mathematical Objects

One of the many unfortunate things about academic fashions is that when a popular project goes out of fashion, superficially similar-looking projects which don't face the same difficulties can be tarred with the same brush.

Many people (myself included) feel cautious pessimism about formulating a satisfying nominalist paraphrase of contemporary scientific theories [one major issue is how to formulate something like probability claims without invoking abstract-seeming events or propositions]. But we wouldn't want to overgeneralize.

 In this post I'm going to suggest three different motives for seeking to systematically paraphrase our best scientific theories (and many other true and false ordinary claims) in a way that dispenses with quantification over mathematical objects, and note that the requirements for success in the first (ex-fashionable) project are notably laxer in some ways than the requirements for the others.

[Note: I don't mean to endorse any of the projects below, but I think that 3 and Burali-Forti based versions of 2 are at least interesting.]

Three Motivations for Paraphrasing Mathematical Objects Out of Physics
  1. General rejection of abstracta: You deny the existence of mathematical objects because you think allowing any abstract objects is bad. (This is the classic motive.)
  2. Explaining special features of mathematical practice by rejection of mathematical objects: You deny the existence of mathematical objects because you think that not taking mathematical existence claims at face value allows for the best account of certain special features of pure mathematical practice (e.g., mathematicians' apparent freedom to choose what objects to talk in terms of and disinterest in mathematical questions that don't affect interpretability strength, or the Burali-Forti paradox in higher set theory). 
  3. Grounding math in logic/bringing out a claimed special relationship between math and logic: You may allow the existence of mathematical objects, but you’re moved by the close relationship between an intuitive modal notion of coherence/semantic consistency/logical possibility and pure mathematics to seek some kind of shared grounding and think that the coherence/logical possibility notion looks to be the more fundamental. As a result, it seems promising to seek a kind of "factoring" story, which systematically grounds all pure mathematics in facts about logical possibility, and all applied mathematics in some combo of logical possibility and intuitively non-mathematical facts.

 Distinguishing these motivations/projects matters, because what you are trying to do influences what ingredients are kosher for use in your paraphrases: 

If you have the first motivation (defending general nominalism), you need to avoid quantifying over any other abstracta, including: platonic objects called masses or lengths which don’t come with any automatic relationship to numbers, corporations, marriages, sentences in natural languages, metaphysically possible worlds, and (perhaps) propensities. 

But if you have the second motivation (defending mathematical-nominalism-adopted-to-explain-special-features-of-mathematical-practice), then quantification over abstracta which don't have the relevant special feature (e.g., objects in domains where we don't appear to have massive freedom to choose what objects to talk in terms of, or objects which don't give rise to a version of the Burali-Forti paradox) is fine. 

And if you have the third motivation, then quantification over any objects which aren’t intuitively mathematical-- or aren't mathematical in whatever way you are claiming requires a special relationship to coherence/semantic consistency/logical possibility -- is OK.

Tuesday, April 19, 2016

Hello World (again)!

As you can see, I haven't posted to this blog for ages.

I've been busy a) enduring the horrors of the job market, b) getting a sweet 5-year postdoc (I still can't express how lucky and grateful I feel), c) finishing a stack of old papers, and d) writing a zillion-page monograph to answer a minor technical question about Potentialism and logical possibility which my advisers asked in grad school (and then rewriting all the proofs 3+ times because a grumpy mathematician friend didn't think the prose was clear or concise enough!).

But now that I have time to focus on new research, I'm thinking it might be fun to start blogging again. I'm certainly touched by the number of lurkers who still turn up to check this blog out.

Let's see how things go!