Tuesday, December 14, 2010

Cog Sci Question

[edited after helpful conversation with OE where I realize my original formulation of the worries was very unclear]

I was just looking at this cool MIT grad student's website, and thinking about the project of a) reducing other kinds of reasoning to Bayesian inference and b) modeling what the brain does when we reason in other ways in terms of such conditionalization.
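(For concreteness, "conditionalization" here just means updating by Bayes' rule, P(h | e) = P(e | h)·P(h)/P(e). A toy sketch, with hypotheses and numbers invented purely for illustration:)

```python
# Minimal Bayesian conditionalization: update a prior over two
# hypothetical hypotheses on a piece of evidence ("wet grass").

prior = {"rain": 0.3, "no_rain": 0.7}       # P(h), invented numbers
likelihood = {"rain": 0.9, "no_rain": 0.2}  # P(wet_grass | h), invented

# Unnormalized posterior P(e | h) * P(h), then normalize by P(e).
unnorm = {h: likelihood[h] * prior[h] for h in prior}
total = sum(unnorm.values())                # P(wet_grass)
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)  # "rain" becomes much more probable than before
```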

This sounds pretty good, but now I want to know:

a) What is a good model for how the brain might do the conditionalization? Specifically: how could it store all the information about the priors? If you picture this conditionalization in terms of a space of possible worlds, with prior probability spread over it like jelly to various depths, it is very hard to imagine how *that* could be translated to something realizable in the brain. It seems like there wouldn't be enough space in the brain to store separate assignments of prior probabilities for each maximally specific description of a state of the world (even assuming that there is a maximum "fineness of grain" to theories which we can consider, so that the number of such descriptions would be finite).
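(To put rough numbers on the storage worry: with n binary features, there are 2^n maximally specific world descriptions, so an explicit table of priors blows up immediately. The usual answer in the modeling literature is a factored representation, as in Bayesian networks, which stores only local probabilities. A toy sketch, with invented numbers and full independence assumed just for simplicity:)

```python
n = 20  # hypothetical binary features describing a "world"
num_worlds = 2 ** n
print(num_worlds)  # over a million maximally specific descriptions
# A full joint table needs one prior per world -- exponential storage.
# At a few hundred features, explicit storage is hopeless.

# Factored sketch: treating features as independent cuts storage from
# 2**n numbers to n numbers. Bayesian networks generalize this by
# storing a conditional table only for each feature's direct parents.
marginals = [0.5] * n  # hypothetical P(feature_i = 1)

def prior_of_world(world, marginals):
    """Prior of one specific world under the independence assumption."""
    p = 1.0
    for bit, m in zip(world, marginals):
        p *= m if bit else (1 - m)
    return p

w = tuple(1 for _ in range(n))  # one particular world description
print(prior_of_world(w, marginals))
```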

b) How do people square basing everything on Bayesian conditionalization with psychological results about people being terrible at dealing with probabilities consciously?

Google turns up some very general results that look relevant but if any of you know something about this topic and can recommend a particular model/explain how it deals with these issues...


  1. I know Dan Levine, who does psychology research at UT Arlington, does both neural net modeling and decision research. His stuff might be worth looking at for "how could the brain do that" types of questions.

    Regarding "but aren't we terrible at prob reasoning": there's a dual-process idea in cog science on which we have one quick "gut judgment" process for time-constrained decision making, but also the capacity for another, more deliberative process, which can be brought more closely into conformity with normative structures like deductive consistency or probabilistic coherence.

    My own somewhat radical view is that these sorts of normative structures (consistency, coherence) are in an important sense outside of our minds/brains, that they're products of doxastic regimentation through interactive, inter-subjective ratiocination (if that makes any sense).

  2. hey, thanks for the link, I will check it out.

    re: "products of inter-subjective ratiocination"

    Do you mean that Bayesian conditionalization is just an ideal for how to reason about probability...and this ideal emerges from e.g. arguments about what bets to take, the process of various countries setting insurance policies and accidentally Dutch booking themselves? Or what? This sounds interesting.

  3. That's right. We're only able to formulate the ideal when we've got a certain amount of mathematical sophistication. Risk-taking behaviors (betting) are prior to our conceptualizing them as reflections of probability judgments or degrees of belief or etc. The Dutch book argument tells us why we might want to conform our behaviors to a certain mathematical ideal. I think it puts the cart before the horse to think that we've evolved to store something like a probability function in our heads. The view I'm attracted to is thus a kind of cognitive externalism, not about content, though, but about doxastic structure. Well, that's the provocative way to put it anyway.
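    (For readers unfamiliar with the argument: a Dutch book is a set of bets an agent with incoherent credences regards as individually fair, but which jointly guarantee a loss whatever happens. A toy sketch with invented credences:)

```python
# Toy Dutch book: an agent whose credences are incoherent
# (P(rain) + P(not rain) > 1) treats each ticket's fair price as
# credence * stake, yet buying both tickets guarantees a loss.

cred_rain = 0.7     # invented credence in rain
cred_no_rain = 0.5  # invented credence in not-rain (sums to 1.2!)

stake = 1.0  # each ticket pays $1 if its proposition is true

# Total price the agent willingly pays for both tickets:
cost = cred_rain * stake + cred_no_rain * stake

for rain in (True, False):
    payout = stake  # exactly one of the two tickets pays out
    net = payout - cost
    print(f"rain={rain}: net = {net:.2f}")  # a loss either way
```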