There HAS to be a great analysis of this somewhere, but looking in the usual places (the Stanford Encyclopedia, the first few Google hits) turns up nothing. Here's my question:

What does it take for something to be a computer?

The obvious place to start is with Turing machines.

But Turing machines are abstract objects (e.g. ordered 7-tuples), so being a computer can't just mean being a Turing machine. However, for each Turing machine we can define a function which takes you from a statement of one state of Turing's imaginary infinite-tape machine to another, in accordance with the rules specified by the imaginary 7-tuple. And the natural thought is that we should say something is a computer if it bears the same kind of relationship to the Turing-machine-qua-7-tuple as the imaginary infinite-tape machine does.

To be clear, there are three things:

-the 7-tuple (the set that encodes the Turing machine program)

-the imaginary machine with infinite tape (whose behavior is completely determined by what it finds on its input tape plus the 7-tuple above that encodes its program)

-the physical system that counts as a computer.
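To make the first two items concrete, here is a minimal sketch (the machine, its names, and the configuration encoding are all my own illustration, not a standard one): a 7-tuple in the usual textbook style, and a step function that advances a total configuration (state, tape, head position) of the imaginary infinite-tape machine.

```python
# A Turing machine as a 7-tuple: (Q, Gamma, b, Sigma, delta, q0, F).
# delta maps (state, symbol) -> (new_state, new_symbol, head_move).
# This toy machine flips 0s and 1s until it reaches a blank.

Q = {"scan", "halt"}             # states
Gamma = {"0", "1", "_"}          # tape alphabet; "_" is the blank b
Sigma = {"0", "1"}               # input alphabet
delta = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}
q0, F = "scan", {"halt"}         # start state, halting states

def step(config):
    """One move of the imaginary infinite-tape machine.

    A configuration is (state, tape, head), with the tape stored as a
    dict from cell index to symbol (blank everywhere else).
    """
    state, tape, head = config
    symbol = tape.get(head, "_")
    new_state, new_symbol, move = delta[(state, symbol)]
    new_tape = dict(tape)
    new_tape[head] = new_symbol
    return (new_state, new_tape, head + move)

# Run on input "101":
config = (q0, {0: "1", 1: "0", 2: "1"}, 0)
while config[0] not in F:
    config = step(config)
print(config[0], [config[1].get(i, "_") for i in range(3)])  # halt ['0', '1', '0']
```

The point of separating `delta` (the 7-tuple's rulebook) from `step` (the imaginary machine's behavior) is exactly the distinction the list draws: the third item, the physical system, is whatever stands to `delta` as `step` does.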

And my initial proposal is this:

X is a computer running a program corresponding to Turing machine t iff there is some "correspondence" function f which pairs (say) English sentences describing physical states of X with strings describing states of the imaginary infinite-tape machine (or the numbers that code for such strings), in such a way that whenever X is in a given state a, it goes into a state b such that the Turing program t says the imaginary machine must go from state f(a) to state f(b).
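The proposal is, in effect, a commutation condition: mapping a physical state to the machine and then stepping the machine must agree with stepping the physical system and then mapping. A minimal sketch (the toy "physics" and all names here are mine, purely for illustration):

```python
# The initial proposal as a commutation check: X runs machine t under
# correspondence f iff, for every physical state a,
#     step_machine(f(a)) == f(step_physical(a))

def realizes(physical_states, step_physical, step_machine, f):
    return all(step_machine(f(a)) == f(step_physical(a))
               for a in physical_states)

# Toy example: physical states are voltages {0, 5}, machine states are
# bits {0, 1}; both dynamics just flip their state.
f = {0: 0, 5: 1}.get
step_physical = {0: 5, 5: 0}.get
step_machine = {0: 1, 1: 0}.get

print(realizes([0, 5], step_physical, step_machine, f))  # True
```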

But this has two problems. The easy problem is that we want to allow that something can be a computer while, in some ways, not behaving like the imaginary turing machine: it can have only finite memory, or be disposed to break. Here we could just say something is more of a computer the more perfectly it realizes that, and admit (as is intuitively the case) that the notion is somewhat vague.

A somewhat harder problem is putty. Imagine that the putty is extremely sensitive to inputs, so that as it oozes along in a complicated way, it never returns to an identical physical state, and it is always disposed to behave differently depending on what input it gets (i.e. where you poke it). Putty shouldn't count as a computer, intuitively speaking. But we can define a function from strings describing the total state of the imaginary machine to suitably detailed sentences about the putty which satisfies the definition above. We just identify two different physical configurations of the putty as counting as "the same" putty state whenever the imaginary machine is supposed to be looping back to the same state, and not otherwise.

To fix this, I suggest that we require human usability. We should say: something only counts as a valid correspondence function if humans can be trained to read off, based on suitable inspection of the state s of X, what description of the imaginary Turing machine f(s) gives (without getting to watch X's input, or knowing the particular program being run). And maybe we should also require that humans can easily and reliably cause the computer to go into sufficiently many of the possible input states.

But this doesn't quite solve the problem. For, what about putty with a firm face that stores its input program in a visible way on the left side, and then just oozes on the right side? So we need to stipulate that people are able to do the above without antecedently coming to know the computer's input in any way.

Thus, the final definition is:

X is a computer to the extent that there is a correspondence function f, going from states of X to numbers coding states of the imaginary Turing machine as above, which can be applied by humans without knowledge of the program X is supposed to be running or of the input to X.
