Saturday, November 7, 2009

Produce the Code!

There's a three-way debate going on between those who want to understand our ability to think in terms of the manipulation of intrinsically meaningful items in the head (physical token sentences of a language of thought), those who want to understand it merely in terms of connections, and behaviorists who think it doesn't matter how our brain produces suitable behavior.

Obviously, one would like to know a lot more about the neuroscience of language use. But, so far as I can tell, the philosophical aspects of this debate could be resolved right now by producing toy blueprints/sample code. Then we look at the code and consider thought experiments in which the brain actually turns out to work as indicated in the code...

Linguistic Behaviorism vs. Non-behaviorism:

If you think that facts about how competent linguistic behavior is produced can be relevant to meaning, produce sample code A and B with the same behavioral outputs, such that we would intuitively judge that a brain that worked in way A and a brain that worked in way B would mean different things by the same words. [I think Ned Block has done this with his Blockhead.]
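
Just to fix ideas, here's a rough sketch of the kind of pair I'm asking for; everything in it (the lookup table, the little "world model", the question format) is invented for illustration. Program A answers by brute lookup on the conversation so far, Blockhead-style, while program B produces the same answers by consulting a toy model of the world:

    # Program A: a Blockhead-style lookup table keyed on the conversation so far.
    BLOCKHEAD_TABLE = {
        ("Is snow white?",): "Y",
        ("Is snow white?", "Is grass white?"): "N",
        # ...in the thought experiment, an entry for every possible history
    }

    def reply_a(history):
        return BLOCKHEAD_TABLE.get(tuple(history), "N")

    # Program B: the same input/output behavior, produced by consulting a
    # small world model instead of a canned table.
    WORLD_MODEL = {"snow": "white", "grass": "green"}

    def reply_b(history):
        words = history[-1].rstrip("?").split()   # e.g. ["Is", "snow", "white"]
        thing, color = words[1], words[2]
        return "Y" if WORLD_MODEL.get(thing) == color else "N"

    print(reply_a(["Is snow white?"]), reply_b(["Is snow white?"]))   # Y Y

If the table really covered every possible history, the two would be behaviorally identical; the question is whether we would still say they mean the same thing by "white".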

If you think stuff inside the head also establishes determinacy of reference, contra Quine, produce two pieces of sample code, A and B, for a program that, e.g., outputs "Y"/"N" to the query "Gavagai?", such that we would intuitively say that people whose brains worked like A meant rabbit and those whose brains worked like B meant undetached rabbit part.
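
Again purely to illustrate the shape of the request (the scene encoding and the object names are made up): two fragments that answer "Gavagai?" identically, where the data structure A consults bottoms out in whole, enduring rabbits and the one B consults bottoms out in undetached rabbit parts.

    # Program A: tokenizes the scene into whole, enduring animals.
    def gavagai_a(scene):
        kinds = {obj["kind"] for obj in scene}
        return "Y" if "rabbit" in kinds else "N"

    # Program B: tokenizes the scene into undetached parts and looks for a
    # part belonging to a rabbit.
    def gavagai_b(scene):
        parts = {(obj["kind"], p) for obj in scene for p in obj["parts"]}
        return "Y" if any(kind == "rabbit" for kind, _ in parts) else "N"

    scene = [{"kind": "rabbit", "parts": ["ear", "foot", "tail"]}]
    print(gavagai_a(scene), gavagai_b(scene))   # Y Y: same behavior either way

Of course the outputs agree, as Quine would insist; the question is whether the difference in the intermediate representations is enough to make us say the two mean different things.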

Language of Thought vs. Mere Connectionism:

If you are a LOT-er who thinks things in the brain don't just co-vary with horses but can actually mean 'horse', produce sample code which generates verbal behavior, in response to sensory inputs, in such a way that we would intuitively judge pieces of the memory of a robot running that program to have meanings.
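
For concreteness, here's one made-up shape such code could take (the feature names and the token format are invented): sensory features get parsed into discrete, structured internal tokens, and the verbal output is produced by operations defined over those tokens.

    # A hypothetical LOT-style fragment: sensory features are turned into
    # structured internal tokens, and behavior is driven by those tokens.
    def lot_respond(features):
        beliefs = set()
        if features.get("has_mane") and features.get("four_legs"):
            beliefs.add(("IS", "x1", "HORSE"))   # the candidate meaningful token
        if ("IS", "x1", "HORSE") in beliefs:
            return "That's a horse."
        return "I don't know what that is."

    print(lot_respond({"has_mane": True, "four_legs": True}))   # "That's a horse."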

Then produce sample code that works in a "merely connectionist" way, and provide some argument that the brain is more likely to turn out to work in the former way.
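
And here, for contrast, is a toy fragment with the same input/output profile realized as nothing but a weighted sum and a threshold (the numbers are arbitrary), with no tokened sentence anywhere inside:

    # A hypothetical "merely connectionist" fragment: the same mapping from
    # features to the utterance, realized as weights rather than tokens.
    WEIGHTS = {"has_mane": 0.9, "four_legs": 0.8, "barks": -0.5}

    def net_respond(features):
        activation = sum(WEIGHTS[f] * float(features.get(f, False)) for f in WEIGHTS)
        return "That's a horse." if activation > 1.0 else "I don't know what that is."

    print(net_respond({"has_mane": True, "four_legs": True}))   # "That's a horse."

The LOT-er's burden, as I understand it, is then to argue that brains are more likely to work like the previous sketch than like this one.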

[NOTE: it does not suffice merely to give a program that derives truth conditions for sentences, unless you also want to posit a friendly homunculus who reads the sentences and works out what proper behavior would be. What your brain ultimately needs to do is produce the correct behavior! So, if you want to compare the efficiency of merely connectionist vs. LOT-like theories of how your brain does what it does, you need to write toy programs that evaluate evidence for snow being white, rocks being white, and sand being white, and respond appropriately, not just the trivial program that prints out an infinite list of sentences: "Snow is white" is true iff snow is white. "Sand is white" is true iff sand is white...]
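
To make the contrast in the note vivid, here is the trivial program next to the kind of toy program the comparison actually requires (the reflectance encoding and the 0.8 threshold are made up):

    # The trivial program: it only enumerates T-sentences and produces no
    # behavior of its own; a homunculus would still have to read them.
    def print_t_sentences(terms=("snow", "sand")):
        for t in terms:
            print(f'"{t.capitalize()} is white" is true iff {t} is white.')

    # The kind of toy program actually needed: it evaluates evidence and responds.
    def is_white(reflectance_samples):
        return "Y" if sum(reflectance_samples) / len(reflectance_samples) > 0.8 else "N"

    print(is_white([0.95, 0.9, 0.97]))   # snow-like evidence -> Y
    print(is_white([0.35, 0.4, 0.3]))    # rock-like evidence -> N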

Charitably, I think the LOT-ers want to say that the only feasible way of making something that passes the Turing test will be to use data structures of a certain kind. But until they can show some samples of which data structures would and wouldn't count, it's really hard to understand this claim. (I mean, the claim is that you will need data structures whose tokens count as being about something. But which ones are these?)
