If you haven’t read about it yet, “Eugene Goostman” is a chatbot that’s being heavily promoted by the University of Reading’s Kevin Warwick, for fooling 33% of judges in a recent Turing Test competition into thinking it was human, and thereby supposedly becoming “the first program to pass the Turing Test” as Turing defined it in his 1950 paper. In reality, while Turing did venture a prediction involving AIs fooling humans 30% of the time by the year 2000, he never set such a numerical milestone as the condition for “passing his test.” Much more importantly, Turing’s famous example dialogue, involving Mr. Pickwick and Christmas, clearly shows that the kind of conversation Turing had in mind was at a vastly higher level than what any chatbot, including Goostman, has ever been able to achieve.

Don’t the journalists covering this story realize that Eugene Goostman isn’t something abstruse, like quantum entanglement: people can try the bot out themselves on the web, and see how far it really is from human? For my part, I tried to refer the reporter, Shahani, to a real AI expert, but apparently the people I suggested weren’t available on short enough notice. So here’s an excerpt from my own conversation with Eugene:

Eugene: :-))) By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions? Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: Ask the Creator about it (call Him by phone, for example:-).

But OK, just in case anyone doubts my humanity, here’s my answer to Eugene’s question about my specialty:

Scott: Not that I’m the one being tested, but I’m a theoretical computer scientist.

To his credit, Eugene’s creator is under no delusions whatsoever about his fun and entertaining creation standing any chance against a serious interrogator.
He comments: “Conditions of the contest made it simpler …
The question of whether machines can think defines the scope of what machines will be able to do in the future and guides the direction of AI research.
What is it about chatbots that makes it so hard for people to think straight? Is the urge to pontificate about our robot-ruled future so overwhelming that people literally can’t see the unimpressiveness of what’s right in front of them?
Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes: "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". Modern AI research, by contrast, defines intelligence in terms of intelligent agents, where an "agent" is something which perceives and acts in an environment.
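That agent framing can be sketched in a few lines of code. The sketch below is purely illustrative (the two-square vacuum world, class names, and percept format are assumptions of mine, loosely following the standard textbook toy example, not anything from this post): an agent is just something that maps percepts of its environment to actions.

```python
from abc import ABC, abstractmethod


class Agent(ABC):
    """An 'agent' perceives its environment and acts on it."""

    @abstractmethod
    def act(self, percept):
        """Map the latest percept to an action."""


class ReflexVacuum(Agent):
    """Toy reflex agent in a two-square world ('A' and 'B'):
    suck if the current square is dirty, otherwise move to the other square."""

    def act(self, percept):
        location, is_dirty = percept  # percept = (current square, dirt status)
        if is_dirty:
            return "suck"
        return "right" if location == "A" else "left"


agent = ReflexVacuum()
print(agent.act(("A", True)))   # -> suck
print(agent.act(("A", False)))  # -> right
```

Even this trivial agent highlights the contrast with the imitation-game framing: its "intelligence" is judged by how well its actions fit its environment, not by whether it can fool anyone in conversation.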