In my first epistemology class in college, the prof encouraged us to look for adequate necessary and sufficient conditions for knowledge by making the following (imo appealing) argument. We expect that there's SOME nice relationship between facts about knowledge and descriptive facts not containing the word "knowledge", since our brains seem to be able to go, somehow, from descriptions of a scenario (like the Gettier cases) to claims about whether the person in that scenario has knowledge. However, philosophical attempts to find a nice definition of knowledge in other terms seem to have systematically failed. This suggests that there may be a correct and informative definition of knowledge to be found, but that this definition is too long to be an elegant philosophical hypothesis, while not being too long to correspond to what the brain actually does when judging these claims.
So here's what I propose the true definition of knowledge might look like:
We describe messy physical processes by talking about simple mechanisms, together with a notion of what these mechanisms tend to do "ceteris paribus". People agree surprisingly much on which mechanisms approximate what (e.g. how to go from facts about swans to claims about the swan lifestyle, or how to divide up actual dispositions to behavior into "behaving normally" vs. "something special happening whereby the ceteris aren't paribus"). One thing that can be approximated in this way is human belief formation. We think about actual human belief formation by saying that, ceteris paribus, it approximates a combination of various belief-forming mechanisms (e.g. logical deduction, looking, etc.). A reliable belief-forming mechanism is one whose ceteris paribus behavior yields true beliefs.
Certain belief-forming mechanisms are popular, and remain popular with people even when they undergo lots of reflection. Some of these are canonical, in the sense that we count them as potential conduits for knowledge. But if we ever come to believe that some such mechanism is not reliable (in the sense defined above), we will stop saying that beliefs formed via it count as knowledge. So here's what I think a correct definition of knowledge might look like.
We have, say, 300 canonical reliable mechanisms for producing knowledge, 200 canonical reliable mechanisms for raising doubt (100 optional and 100 obligatory), and 200 canonical reliable mechanisms for assuaging doubt. Call these CRMs. Our definition starts by giving a finite list of all these CRMs.
You know P if and only if your belief in P was generated by some combination of CRMs for producing knowledge, and you went through the CRMs for assuaging doubt corresponding to (a) all optional CRMs for doubt raising that you did engage in and (b) all obligatory CRMs for doubt raising that apply to your situation.
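To make the shape of this schema concrete, here is a minimal sketch in Python. Everything specific in it is invented for illustration: the particular mechanisms, the pairing of doubt-raising CRMs with the assuaging CRMs that answer them, and the function names are hypothetical placeholders, not part of the proposal; only the overall form (finite lists of CRMs plus the biconditional above) is.

    # Toy model of the proposed definition's *form*; all specific CRMs below are
    # hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DoubtRaiser:
        name: str
        obligatory: bool  # obligatory doubt-raisers must be answered whenever they apply

    # Stand-ins for the finite lists of canonical reliable mechanisms (CRMs).
    KNOWLEDGE_PRODUCERS = {"perception", "deduction", "memory"}
    ASSUAGERS_FOR = {  # which doubt-assuaging CRMs answer which doubt-raising CRMs
        "could-be-dreaming": {"coherence-check"},
        "conflicting-testimony": {"double-check-source"},
    }

    def knows(producers_used, raisers_engaged, raisers_applicable, assuagers_used):
        """The biconditional above: the belief is generated by knowledge-producing
        CRMs, and every doubt that had to be addressed was met by a matching
        doubt-assuaging CRM."""
        # (1) The belief must come entirely from canonical knowledge producers.
        if not producers_used or not producers_used <= KNOWLEDGE_PRODUCERS:
            return False
        # (2) Doubts needing an answer: optional raisers you actually engaged in,
        #     plus obligatory raisers that apply to your situation.
        must_answer = ({r for r in raisers_engaged if not r.obligatory}
                       | {r for r in raisers_applicable if r.obligatory})
        # (3) Each such doubt must be met by at least one corresponding assuager.
        return all(ASSUAGERS_FOR.get(r.name, set()) & assuagers_used for r in must_answer)

    # Example: a perceptual belief where the obligatory "conflicting-testimony"
    # doubt applies and has been answered, vs. one where an engaged optional
    # doubt was never assuaged.
    dreaming = DoubtRaiser("could-be-dreaming", obligatory=False)
    testimony = DoubtRaiser("conflicting-testimony", obligatory=True)
    print(knows({"perception"}, set(), {testimony}, {"double-check-source"}))      # True
    print(knows({"perception"}, {dreaming}, {testimony}, {"double-check-source"})) # False

The point of the sketch is just the overall shape: a long, brute finite list of mechanisms plus a short biconditional, rather than a short elegant analysis.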
Even though this is just a claim about the form a correct definition of knowledge would take, it already has some reasonably testable consequences:
1. That situations where it seems unclear or vague which mechanism best describes a person's behavior (should I think of the student as correctly applying this specific valid inference rule, or as fallaciously applying a more general but invalid inference rule?) will also make us feel that it's unclear or vague whether the person in question has knowledge.
2. That we should feel unclear about whether to attribute knowledge when reliable but science-fictional, and hence non-canonized, mechanisms are described. For example, most people would say it's OK to take deliverances of the normal 5 senses at face value, without checking them against something else. But what about creatures with a 6th sense that allowed them to reliably read minds, or to form true beliefs about arbitrary Π⁰₁ statements of arithmetic, i.e. statements of the form "for all n, φ(n)" where each instance can be checked mechanically? (Imagine creatures living in a world with weird physics that allows supertasks, and suppose that they have some gland that has no effect on conscious experience, but whose deliverances reliably reflect a check of each case.) Would they count as knowing if they form beliefs by using these mechanisms?