"Human", Selmer Bringsjord explains, "is generally a term of biology." Questions such as "What does it mean to be human?" may be important to biomedical engineers, but they don't matter much to artificial intelligence (AI). "Any AI at all is a form of a zombie," devoid of consciousness and emotion, explains the chair of the Department of Cognitive Science at Rensselaer Polytechnic Institute (RPI). "The best you can do", Bringsjord adds, is to convince people that an AI is a human. That's the very goal of the Turing Test, a final exam of sorts which Selmer Bringsjord hopes to pass as part of a project with IBM called "Engineering Cognitively Robust Synthetic Characters."
But is the Turing Test a true test of a machine's "ability" to demonstrate intelligence?
One criticism of this holy grail of artificial intelligence is that a machine which solves a problem no human being could solve would, in principle, fail the Turing Test. Alan Mathison Turing, the English logician who devised the test that bears his name, admitted as much. There are limits to how quickly humans can perform calculations on the fly, and computers win these calculation races hands down. But what would be really impressive, says Bringsjord, is if the machine took longer than was really necessary and appeared to be "humbled". After all, the speed of numerical calculation "is not at the heart of human intelligence".
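To make the idea concrete, here is a minimal sketch (not Bringsjord's actual system) of how a chatbot might deliberately throttle its arithmetic to human-plausible speed. The function names and timing constants are illustrative assumptions, not empirically calibrated figures:

```python
import random
import time

def instant_answer(a: int, b: int) -> int:
    """The machine's real capability: multiplication is effectively instantaneous."""
    return a * b

def humanlike_answer(a: int, b: int) -> int:
    """Hypothetical sketch: delay the reply so it looks like mental arithmetic.

    Assumption: a human needs very roughly one to three seconds per digit
    to multiply two numbers in their head.
    """
    digits = len(str(a)) + len(str(b))
    time.sleep(digits * random.uniform(1.0, 3.0))  # feign effort before replying
    return instant_answer(a, b)

if __name__ == "__main__":
    print(humanlike_answer(17, 24))  # pauses "thoughtfully", then prints 408
```

The design point is the one Bringsjord makes: to an interrogator, the pause itself is evidence of humanity, because an instant answer to a hard multiplication would give the machine away.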
Of Men and Machines
The history of science is filled with great men who loved their creations too much, and saw the dangers in them only too late. Alfred Nobel invented dynamite and blasting caps, but was shamed into establishing the prizes which bear his name after a French newspaper labeled him "the merchant of death". Albert Einstein urged President Franklin D. Roosevelt to secure a supply of uranium ore for the United States, but later characterized his letter to the White House as his "greatest mistake". Fortunately, Selmer Bringsjord recognizes the dangers posed by artificial intelligence. When asked if his work could lead humanity down a path which ends in the science fiction world of The Terminator, Bringsjord admitted that "it is a step in that direction".
In George Orwell's novel 1984, the citizens of a mythical super-state called Oceania are subjected to constant surveillance by an omnipresent figure called Big Brother. Although Orwell's dystopia did not come to pass, moviegoers may remember another science fiction vision from 1984. In The Terminator, a computer system named Skynet dispatches a human-looking cyborg to kill Sarah Connor, mother of John Connor, the future leader of the resistance against machines which have nearly destroyed all mankind. CR4 won't ruin the movie's plot, but it's interesting to note that the nuclear war which precipitated the rise of the machines occurred in 1997. Ten years later, in December of 2007, a real-life Department of Defense (DoD) report described the U.S. military's plans to develop a myriad of unmanned systems over the next 25 years.
A Dark Future Is Pretty Simple
As Selmer Bringsjord explains, "a dark future is pretty simple" if humanity keeps "giving destructive capabilities" to unmanned aerial vehicles (UAVs) such as the ones now used in Iraq and Afghanistan. Bringsjord worries about a future where machines are "incentivized to kill", and told CR4's frankd20 and Moose that humanity needs "a form of mechanized ethics" before it's too late. He has written extensively on this subject, including an article for IEEE Intelligent Systems called "Towards a General Logicist Methodology for Engineering Ethically Correct Robots", in which he argues these very points.
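As a rough illustration of what "mechanized ethics" might look like in practice, consider an ethical gate that refuses to execute any action a hard-coded rule forbids. This is a toy sketch, not the formal deontic-logic machinery the IEEE article describes; the rules, actions, and context fields below are all illustrative assumptions:

```python
# A toy ethical gate: every command must clear a set of prohibition rules
# before the system will act. The rules and actions are illustrative
# assumptions, not drawn from Bringsjord's paper.

FORBIDDEN = {
    # Each rule returns True when the action is forbidden in this context.
    "fire_weapon": lambda ctx: not ctx.get("target_confirmed_hostile", False),
    "enter_airspace": lambda ctx: ctx.get("civilian_zone", False),
}

def permitted(action: str, context: dict) -> bool:
    """An action is permitted only if no applicable rule forbids it."""
    rule = FORBIDDEN.get(action)
    return rule is None or not rule(context)

def execute(action: str, context: dict) -> str:
    if not permitted(action, context):
        return f"REFUSED: {action} violates an ethical constraint"
    return f"EXECUTED: {action}"

print(execute("fire_weapon", {"target_confirmed_hostile": False}))
# -> REFUSED: fire_weapon violates an ethical constraint
```

The logicist program Bringsjord advocates goes much further than checking rules one action at a time: the goal is to prove, in a formal logic, that no behavior the machine can reach violates its ethical code.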
So what else did Selmer Bringsjord tell CR4 about logic, ethics, and artificial intelligence? Find out tomorrow in Part 4, the last article in this series.
Editor's Note: Part 1 and Part 2 of this interview ran last week. Part 4 is now on-line, too.
Steve Melito - The Y Files