If you're in the tech industry, or you're just generally interested in technology, chances are you've heard of the Turing test. It's usually billed as a litmus test for intelligence in machines. The idea, in its most basic form, is simple. A judge converses with two subjects, one of which is human and one of which is a machine, via some mechanism, such as a text-only channel, that hides the physical characteristics of the subjects. If the judge cannot reliably tell the human from the machine, we say that the machine is intelligent. Simple as that.
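The setup above can be sketched as a toy simulation. Everything here is hypothetical and purely illustrative, not from Turing's paper: a judge asks both hidden subjects the same questions, sees only the shuffled, anonymized transcripts, and points at the one it believes is the machine.

```python
import random

def run_imitation_game(questions, judge_guess, human, machine, seed=0):
    # Toy harness for the imitation game; names and structure are
    # illustrative assumptions. Each hidden subject answers the same
    # questions; the judge sees only the two anonymized transcripts,
    # in shuffled order, and returns the index of its machine guess.
    rng = random.Random(seed)
    subjects = [("human", human), ("machine", machine)]
    rng.shuffle(subjects)  # hide which transcript is which

    transcripts = [(label, [(q, respond(q)) for q in questions])
                   for label, respond in subjects]

    # The judge never sees the labels, only the conversation text.
    pick = judge_guess([qa for _, qa in transcripts])
    # In this one-shot toy, the machine "passes" when the judge points
    # at the wrong transcript. (A more faithful criterion is statistical:
    # the judge should fail to reliably pick out the machine over many
    # interrogations.)
    return transcripts[pick][0] != "machine"

# Demo: a naive judge that flags the more robotic-sounding answer.
questions = ["What is 7 * 8?", "How do you feel today?"]
human = lambda q: "56" if "7 * 8" in q else "A bit tired, honestly."
machine = lambda q: "56" if "7 * 8" in q else "Functioning within normal parameters."

def judge(transcripts):
    for i, qa in enumerate(transcripts):
        if any("parameters" in answer for _, answer in qa):
            return i  # this one sounds like a machine
    return 0

passed = run_imitation_game(questions, judge, human, machine)
# This particular machine gives itself away, so it does not pass.
```

The point of the sketch is that nothing in the harness inspects the subjects themselves; the verdict rests entirely on the conversation, which is exactly the behavioral stance the test takes.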
The Turing test has been a major source of controversy in philosophy of mind circles, but the irony is that Turing intended it as a way of sidestepping debate. In his view, asking whether a machine could think was about as useful as asking whether a submarine could swim; the answer completely depends on what you mean by "machine" and "think". These are questions with very subjective answers, so instead of going down that particular rabbit hole, he came up with his test. And he had a good reason to think it was a fair one: we use it to judge the intelligence of other people every single day. We judge another person's intelligence based on our conversations with them. There's no other way to do it; you don't get to peer into their brains to watch the gears move. Why shouldn't machines be subject to the same treatment?