Quote:
Originally Posted by Matadora
Quoting Bill Nye from the link:
“I’m skeptical, especially about these extraordinary timelines — 2029? [To pass the Turing Test] What is that, 12 years from now? No! No,” Nye said. “I’m not concerned, because humans make the machines. Sooner or later, to put it in old terms, somebody’s got to shovel the coal to make the electricity run the machine.”
I think that passing the Turing Test by 2029 is not implausible, but progress toward AI seems to me to have been slower than expected over the past couple of decades, so I can also see a plausible case that it might take longer. There are some critical unknowns at this point. One, in my view, is the relationship between intelligence and qualia (i.e., the feelings, or there being "something it is like," to do the thinking).
If non-qualia-experiencing machines can beat a rigorous form of the Turing Test, then I'd say 2029 is a very reasonable prediction. But that "if" harbors an important assumption that I think some technophiles are not sufficiently considering. It could be that qualia are the key to the sort of ultra-contextualized thinking that humans do.
Suppose you see a butterfly in your yard this morning. Immediately you "know" a FAPP (for all practical purposes) infinite number of facts without even thinking about them (i.e., without consciously knowing that you know them). Just a few examples: You know that there was a butterfly between the houses at two different addresses (yours and your neighbor's). You know there was a yellow object in your yard that was probably not there last January. You know that it probably wasn't there last February either. And so on, ad infinitum. And given this vast knowledge, you can almost instantly become consciously aware of any of these facts, seemingly without having to do extensive memory searches or complex logic chains.

Maybe there are perfectly mechanical ways in which the brain does this (perhaps some combination of neural net processing and quantum computing tricks?), and maybe these mechanical tricks can be applied to quantum computers by 2029. But, on the other hand, it is possible that qualia are not strictly mechanical, and possible that qualia are necessary for realizing genuinely human-like intelligence. In that case, the 2029 prediction would probably be overly optimistic.
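To make the "mechanical" side of this concrete, here is a toy sketch (my own illustration, not anything from the post or from any real AI system) of how a purely rule-based program could spin one observation into many entailed facts without any search or experience. The observation format and the rules are invented for the example.

```python
# Toy forward-chaining sketch: derive implicit facts from one observation.
# All field names and rules here are invented for illustration only.

observation = {"object": "butterfly", "color": "yellow",
               "location": "my yard", "time": "this morning"}

def entailed_facts(obs):
    """Mechanically expand one observation into the facts it entails."""
    facts = set()
    # The direct fact itself.
    facts.add(f"a {obs['object']} was in {obs['location']} {obs['time']}")
    # Property entailment: a butterfly sighting is also a colored-object sighting.
    facts.add(f"a {obs['color']} object was in {obs['location']} {obs['time']}")
    # Spatial entailment: the yard lies between two addresses.
    facts.add(f"a {obs['object']} was between my house and my neighbor's")
    # Temporal entailments: it was probably absent in the winter months.
    for month in ["January", "February", "March"]:
        facts.add(f"the {obs['object']} was probably not there last {month}")
    return facts

facts = entailed_facts(observation)
print(len(facts))  # prints 6
```

Of course, the real puzzle in the paragraph above is that the hand-written rules here have to be supplied in advance, whereas a human seems to have the whole FAPP-infinite space available at once; whether that can be done with mechanical tricks alone is exactly the open question.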
My guess: Qualia are not strictly necessary for passing a standard Turing Test, and machines probably will pass the TT by 2029. But qualia probably are necessary for more genuinely human-like intelligence (I'd say they play a causal role in our behaviors), and thus machines probably won't pass a really deep version of the TT unless they experience qualia. (For example, a human-like embodied android could pass a 10-day version of the TT if he set that as his goal, but if you were to marry him without realizing that he is a machine and live with him for decades, you might still conclude, on the basis of behavior alone, that he is a machine, despite his best efforts to maintain the TT trick over the decades.) This is assuming, of course, that he doesn't experience qualia. But, then again, maybe any machine that is complex enough to pass the TT just naturally starts to experience qualia. That's the critical wildcard.
Without an adequate theory of the relationship between mechanical laws and qualia, we are really just guessing. Still, if I could lay down some investment cash, I'd bet on machines passing the TT by 2029. I'm less confident about the singularity by 2045, but I would probably still bet on it if I had to place a bet. We are already well on our way to integrating biology and machines. Barring some nuclear catastrophe, worldwide anti-tech political movement, zombie apocalypse, etc., I expect something essentially "singularity-like" to happen by 2060. But, really, does it matter whether it is 2045 or 2060 or 2160 or 2260?
I'm curious to know: Does anyone in this thread think that machines will probably never surpass human intelligence?