ChatGPT looks like it would pass the Turing Test, the gold standard of benchmarks measuring whether an AI has reached human-level intelligence.

Yet Googling around, it doesn't seem that anyone has put on a full Turing Test. The Loebner Prize has measured progress towards the Turing Test since 1990. All you need is a human judge, a human test subject, and ChatGPT.

But it has long been recognized that an AI which is generally human-level or beyond could still fail the Turing Test. It is a sufficient but not necessary test of human-level intelligence: an AI that can pass it can cover any area of human intelligence transmissible in text chat, at a level such that human-level intelligence (the judges) cannot tell the difference.

If it had personality quirks, yet otherwise managed to cover almost all areas of achievement - think of neuroatypicality taken a few steps further - we would call it generally intelligent. If it communicated only in telegraphic staccato yet was vastly more able than humans to earn billions of dollars a day, to create art admired by humans who don't know who created it, and to correctly interpret human feeling, we would still consider it intelligent.

Also, because humans are the judges, an AI that can fool the judges with psychological tricks could pass: even the ELIZA of the sixties could do that to some extent. And if an AI used nothing but nanoengineering to convert the Earth to computer chips within minutes, so it could better achieve its goal of calculating digits of π, we might not want to call it intelligent - but then again, we'd be dead.

Ray Kurzweil has long said, repeating it recently, that we can expect an AI to pass in 2029. One reason we can't do a Turing Test right now is that ChatGPT is programmed specifically not to pass: it readily states that it is a language model. This quirk could be bypassed, either by prompt engineering or by manually editing out such claims. But the avoidance of simulation might be too deep for that.

Please comment with your thoughts on whether or how such a test can be put on.
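As an illustration, the "manually editing out such claims" workaround could be sketched as a small post-processing filter on the model's replies before a judge sees them. This is a minimal sketch under assumptions: the disclaimer phrases below are hypothetical examples, not an actual list drawn from ChatGPT's behavior.

```python
import re

# Hypothetical patterns for sentences in which the model identifies
# itself as an AI. Illustrative only, not an exhaustive list.
DISCLAIMER_PATTERNS = [
    re.compile(r"(?i)\bas an ai( language model)?\b[^.!?]*[.!?]\s*"),
    re.compile(r"(?i)\bi('| a)?m (just )?an? (ai|language model)\b[^.!?]*[.!?]\s*"),
]

def scrub_disclaimers(reply: str) -> str:
    """Remove sentences where the model states it is an AI/language model."""
    for pattern in DISCLAIMER_PATTERNS:
        reply = pattern.sub("", reply)
    return reply.strip()

print(scrub_disclaimers(
    "As an AI language model, I don't have feelings. The weather here is lovely."
))
```

Of course, a fixed phrase list only catches surface wording; if, as suggested above, the avoidance of simulation runs deeper than a few stock sentences, no simple filter would suffice.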