Wednesday, November 18, 2009

bookclub: Gareth Evans on Semantics + Tacit Knowledge I

I just discovered that Gareth Evans has a neat article (probably a classic) on the very issues I've been worrying about recently: what a semantic theory is supposed to do. I found it so interesting that I'll probably write a few posts about different issues in this article.

The article starts out by paraphrasing Crispin Wright, to the following effect:

If philosophers are trying to state what it takes for sentences in English to be true, there's a very simple schema, '"S" is true iff S' (this is called Tarski's T-schema), which immediately gives correct truth conditions for all English sentences.
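For concreteness, here is the stock textbook instance of the schema (my example, not one taken from Evans's paper), got by putting the same sentence in for S on both sides:

```latex
% One instance of Tarski's T-schema, with "Snow is white" substituted for S:
\text{``Snow is white'' is true} \iff \text{snow is white}
```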

But obviously, when philosophers try to give semantic theories they aren't satisfied with just doing this. So what, then, is the task of formal semantics?

I think this is a great question. When I first read it I thought:

Perhaps what we want to do is notice systematic relationships between the truth conditions for different sentences in English, e.g. whenever "it is raining" is true, "it is not the case that it is raining" is false. If you want to make this sound fancy, you could call it noticing which syntactic patterns (e.g. sentence A being the result of sticking "it is not the case that" onto the front of sentence B) echo interesting semantic properties (e.g. sentence A having the opposite truth value from sentence B).
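Here's a toy sketch of what I have in mind (my own illustration, not anything in Evans or Wright): the syntactic operation of prefixing "it is not the case that" is systematically tracked by the semantic operation of flipping a truth value.

```python
# Toy illustration: a miniature "semantics" in which the syntactic pattern of
# prefixing "it is not the case that" echoes the semantic property of having
# the opposite truth value.

# Truth values of some atomic sentences in one imagined situation.
facts = {
    "it is raining": True,
    "the slab is red": False,
}

NEG = "it is not the case that "

def true_in(situation, sentence):
    """Evaluate a sentence, peeling off any number of negation prefixes."""
    if sentence.startswith(NEG):
        return not true_in(situation, sentence[len(NEG):])
    return situation[sentence]

assert true_in(facts, "it is raining") is True
assert true_in(facts, "it is not the case that it is raining") is False
assert true_in(facts, "it is not the case that it is not the case that the slab is red") is False
```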

However, I would call this endeavor the study of logic, rather than semantics. So far we have logical theories that help us spot patterns in how words like "and" and "there is" (and perhaps "necessarily") affect the truth conditions for sentences they figure in. There may be similar patterns to notice for other words as well (e.g. color attributions: something can be both red and scarlet but not both red and green), and one could develop a logic for each of these.
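For the color case, a minimal sketch (again my own, with the encoding and names entirely made up for illustration) of the sort of "logic of color words" this gestures at:

```python
# Toy "logic" of color attributions: "scarlet" entails "red", while "red"
# and "green" exclude each other. We can then check a set of attributions
# to one object for consistency.

ENTAILS = {("scarlet", "red")}            # anything scarlet is red
EXCLUDES = {frozenset({"red", "green"})}  # nothing is both red and green

def consistent(attributions):
    """Close the attributions under entailment, then look for exclusions."""
    closed = set(attributions)
    changed = True
    while changed:
        changed = False
        for stronger, weaker in ENTAILS:
            if stronger in closed and weaker not in closed:
                closed.add(weaker)
                changed = True
    return not any(pair <= closed for pair in EXCLUDES)

print(consistent({"scarlet", "red"}))    # True: scarlet things are red
print(consistent({"scarlet", "green"}))  # False: scarlet entails red, which excludes green
```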

We aren't saying what "and" means (presumably, if we are in a position to even try to give a logic for English expressions, we already know that "and" means and); rather, we are discovering systematic patterns in the truth conditions for different sentences containing "and".

So, rule that candidate answer off the list.

Instead, Wright suggests (and Evans seems to allow) that semantics goes beyond trivially stating the truth conditions for English sentences by "figuring in an explanation of the speaker's capacity to understand new sentences". (I am quoting from Evans, but both deplore the vagueness of this statement.)

This sounds initially plausible to me, but it raises a question:

Once we have noticed that attributions of meaning don't require anything deeper than the kinds of systematic patterns of interaction with the world displayed by Wittgenstein's builders (maybe with some requirement that these interactions be produced by something that doesn't look like Ned Block's giant look-up table), the question of how human beings actually manage to produce such behavior seems to be a purely scientific question.

There are just neuroscientific facts about a) how the relevant alterations of behavior (corresponding, e.g., to learning the word "slab") are produced when a baby's brain is exposed to a suitable combination of sensory inputs, and b) what algorithm most elegantly describes or models this process.

So, what's the deal with philosophers trying to do semantics? And what does it take for one algorithm to model a brain process better than another? I'll try to get clearer on these questions, and on what Evans would say about them, in the next post.
