[edited after helpful conversation with OE where I realize my original formulation of the worries was very unclear]
I was just looking at this cool MIT grad student's website, and thinking about the project of a) reducing other kinds of reasoning to Bayesian inference and b) modeling what the brain does when we reason in other ways in terms of such conditionalization.
This sounds pretty good, but now I want to know:
a) What is a good model for how the brain might do the conditionalization? Specifically: how could it store all the information about the priors? If you picture this conditionalization in terms of a space of possible worlds, with prior probability spread over it like jelly to various depths, it is very hard to imagine how *that* could be translated to something realizable in the brain. It seems like there wouldn't be enough space in the brain to store separate assignments of prior probabilities for each maximally specific description of a state of the world (even assuming that there is a maximum "fineness of grain" to theories which we can consider, so that the number of such descriptions would be finite).
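To make the storage worry concrete, here's a toy calculation (all numbers hypothetical, and the function names are just mine): storing a separate prior for each maximally specific world-description over n binary propositions takes 2^n entries, whereas a factored representation in the style of a Bayesian network, where each proposition's probability depends on at most k others, takes only n·2^k. This doesn't answer how the brain does it, but it shows the jelly-over-possible-worlds picture isn't the only way to realize the same prior:

```python
def full_joint_entries(n):
    """Entries needed for an explicit prior over all 2^n
    maximally specific descriptions of n binary propositions."""
    return 2 ** n

def factored_entries(n, k):
    """Entries for a factored (Bayesian-network-style) prior:
    each of the n propositions stores a conditional table over
    at most k parent propositions, i.e. at most 2^k rows each."""
    return n * (2 ** k)

# 100 binary propositions, each depending on at most 3 others:
n, k = 100, 3
print(full_joint_entries(n))    # 2**100 -- astronomically large
print(factored_entries(n, k))   # 800
```

The factored version encodes the same joint prior (given the independence assumptions) in a brain-plausible amount of space, which is one standard line of response to this kind of worry.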
b) How do people square basing everything on Bayesian conditionalization with psychological results showing that people are terrible at dealing with probabilities consciously?
Google turns up some very general results that look relevant, but if any of you know something about this topic and can recommend a particular model, or explain how it deals with these issues, I'd love to hear it.