Weizenbaum left his program running at a terminal, and his secretary was fooled into thinking she was communicating with Weizenbaum himself, remotely, as described in his 1976 book Computer Power and Human Reason. ELIZA is still answering questions today.

Subsequent programs which fooled humans include “PARRY”, which “pretended” to be a paranoid schizophrenic.

In general, it has not gone unnoticed that programs that mimic those with a limited behavioural repertoire or limited knowledge or understanding, or that engage those who are predisposed to accept their authenticity, are more likely to fool their audience.

But none of this engages the real issue of artificial intelligence today: what criterion would establish something close to human-level intelligence?

The Turing Test, even as envisaged by Turing, let alone as manipulated by publicity seekers, has limitations. As US philosopher John Searle and cognitive scientist Stevan Harnad have already pointed out, anything like human intelligence must be able to engage with the real world (“symbol grounding”), and the Turing Test doesn’t test for that.

My view is that they are right, but that passing a genuine Turing Test would nevertheless be a major achievement, sufficient to launch the Technological Singularity – the point when intelligence takes off exponentially in robots.

So when will we achieve AI? In 1967, predicted Herbert Simon; by 2000, suggested Alan Turing; in 2014, with the Eugene Goostman program on the weekend; or much later. All the dates before 2029 are in my view just silly. Google’s director of engineering Ray Kurzweil at least has a real argument for 2029, based on Moore’s Law-type progress in technological improvement. However, his arguments don’t really work for software.
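To see how shallow the trick behind ELIZA-style chatbots is, here is a minimal sketch in Python of the general technique: match a keyword pattern in the user's input, then echo the user's own words back as a question, with first- and second-person words swapped. The rules and wording below are illustrative assumptions, not Weizenbaum's original DOCTOR script.

```python
import re

# Illustrative pronoun swaps so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A tiny, made-up rule set in the ELIZA style: (pattern, reply template).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned reply built from the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # content-free fallback when nothing matches

print(respond("I feel ignored by my computer"))
# → Why do you feel ignored by your computer?
```

There is no knowledge or understanding anywhere in this loop, yet replies like these were enough to fool Weizenbaum's secretary — which is precisely why a limited behavioural repertoire, such as a therapist's non-committal questioning, makes an audience easier to deceive.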