In October 1950, the philosophy journal Mind published a paper by the brilliant mathematician Alan Turing. The test it proposed, now known as the Turing test, is widely considered a behavioral test for consciousness.[1] His paper remains relevant to modern thinking about whether a computer running sophisticated AI (artificial intelligence) software can think. In that paper, Turing wrote:
I PROPOSE to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be
‘My hair is shingled, and the longest strands are about nine inches long.’
We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’
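The game Turing describes has a simple protocol: an interrogator questions two hidden players over a text channel and must guess which label hides which player. Here is a minimal, hypothetical simulation of that structure; the function names and the toy players are my own illustration, not anything from Turing's paper.

```python
import random

def imitation_game(player_a, player_b, interrogate):
    """One round of the imitation game.

    player_a, player_b: functions mapping a question string to an answer string.
    interrogate: given an ask(label, question) channel, returns the label
    ('X' or 'Y') it believes hides player A.
    """
    # Hide the players behind the labels X and Y in a random order.
    labels = {'X': player_a, 'Y': player_b}
    if random.random() < 0.5:
        labels = {'X': player_b, 'Y': player_a}

    def ask(label, question):
        # The interrogator only ever sees text answers, never the players.
        return labels[label](question)

    guess = interrogate(ask)
    truth = 'X' if labels['X'] is player_a else 'Y'
    return guess == truth

# Toy run: both players give the same answer to everything, so the
# interrogator can do no better than chance over many rounds.
a = lambda q: "My hair is shingled, about nine inches long."
b = lambda q: "My hair is shingled, about nine inches long."
wins = sum(imitation_game(a, b, lambda ask: 'X') for _ in range(1000))
```

Turing's substitution is exactly this: swap a machine in for `player_a` and ask whether the interrogator's success rate changes.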
From what I can tell, Turing's paper was one of the sources of philosopher John Searle's 1980 thought experiment, the Chinese Room. That experiment leads me to think that computers and software cannot think or be sentient.
Maybe future computer technology will come to mimic the workings of the human mind closely enough that it becomes impossible to distinguish a human from a machine. There is research moving in this direction:
A synaptic transistor is an electrical device that can learn in ways similar to a neural synapse. It optimizes its own properties for the functions it has carried out in the past. The device mimics a property of neurons called spike-timing-dependent plasticity, or STDP.
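STDP has a standard mathematical form: a synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens when the order is reversed, with an effect that decays exponentially as the spikes move apart in time. A minimal sketch of the classic pair-based rule follows; the parameter values are illustrative defaults, not numbers from the synaptic-transistor work.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change.

    dt = t_post - t_pre in milliseconds. Positive dt (pre fires before
    post) potentiates the synapse; negative dt depresses it.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # long-term potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # long-term depression
    return 0.0

# Causal pairing (pre fires 5 ms before post) strengthens the synapse;
# the reversed order weakens it.
ltp = stdp_dw(5.0)
ltd = stdp_dw(-5.0)
```

A synaptic transistor implements something like this curve in hardware, its conductance playing the role of the weight.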
But even if computers running AI become indistinguishable from humans, would that amount to thinking or consciousness? Answering that would require a much better understanding of how humans think or are conscious.
These are encouraging, fascinating days in science. Too bad it's not the same for politics.
Footnote:
1. One expert described it like this in 2017: The best known behavioral test for consciousness is the Turing test, which was put forward by Alan Turing in 1950 as an answer to the question “Can machines think?” Instead of defining what he meant by “machines” and “think,” he chose to limit the machines to digital computers and operationalized thinking as the ability to answer questions in a particular context well enough that the interrogator could not reliably discriminate between the answers given by a computer and a human (via teleprinter) after 5 min of questioning.
---------------------------------------------------------
---------------------------------------------------------
ChatGPT-written books are flooding Amazon as people turn to AI for quick publishing
- There were over 200 e-books in Amazon’s Kindle store as of mid-February listing ChatGPT as an author, but there is no requirement to disclose the use of AI
- Some worry that without more transparency, the technology could put a lot of authors out of work by flooding the market with low-quality books
Good ol' AI, it's making our lives better faster. Or maybe not. Authors can claim they wrote what AI wrote. Is that copyright infringement, or just hooliganism?
---------------------------------------------------------
---------------------------------------------------------
China's Mars rover may have gone kaput. Bummer. China isn't spilling the beans.
NASA Images Confirm China's Mars Rover Hasn't Moved in Months