Our own Ari Schulman recently reviewed Ray Kurzweil’s latest book How to Create a Mind for The American Conservative. Ari’s review challenges both Kurzweil’s ideas and his aspirations, which are, as is quite often the case in transhumanist fantasies, rather base — virtual sex and so on. Here Ari criticizes Kurzweil’s dismissal of human consciousness:
The fact that Kurzweil ignores or even denies the great mystery of consciousness may help explain why his theory has yet to create a mind. In truth, despite the revelatory suggestion of the book’s title, his theory is only a minor variation on ideas that date back decades, to when Kurzweil used them to build text-recognition systems. And while these techniques have produced many remarkable results in specialized artificial-intelligence tasks, they have yet to create generalized intelligence or creativity, much less sentience or first-person awareness.
Perhaps owing to this failure, Kurzweil spends much of the book suggesting that the features of consciousness he cannot explain — the qualities of the senses and the rest of our felt life and their role in deliberate thought and action — are mostly irrelevant to human cognition. Of course, Kurzweil is only the latest in a long line of theorists whose attempts to describe and replicate human cognition have sidelined the role of first-person awareness, subjective motivations, willful action, creativity, and other aspects of how we actually experience our lives and our decisions.
Read the whole thing here.
Another worthy take on Kurzweil’s book can be found in a review by Edward Feser, the fine philosophical duelist (and dualist) who recently caused a stir with his able defense of Thomas Nagel. Feser’s review of Kurzweil appears in the April 2013 issue of the magazine First Things, where it is, alas, behind a paywall for now. He focuses on Kurzweil’s ignorance of the distinction between “phantasms” (which are closely related to the senses) and “concepts” (which are more abstract and universal) — a distinction found in Thomist and Aristotelian thinking about thinking. Here is just a very tiny snippet from Feser:
[Kurzweil’s] critics have pointed out that existing AI systems that implement … pattern-recognition in fact succeed only within narrow boundaries. A deeper problem, though, is that nothing in these mechanisms goes beyond the formation of phantasms or images. And while a phantasm can have a certain degree of generality, as Kurzweil’s pattern-recognizers do, it lacks the true universality and unambiguous content characteristic of concepts and definitive of genuine thought.
I wonder how Kurzweil’s admirers and defenders would respond to Feser’s critique. And I wonder how far Ari and Feser would be willing to concede the AI project might someday get, notwithstanding the faulty theoretical arguments sometimes made on its behalf. Feser suggests that, instead of How to Create a Mind, Kurzweil’s book might more appropriately be titled “something like How to (Partially) Simulate a (Subhuman) Mind.” What does that mean, practically speaking? Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?
3 Comments
I don't think there's any reason not to take the possibility of human-level AI (one could call it merely human-mimicking AI, without consciousness, if one prefers) quite seriously.
The EU, after all, just decided to throw a billion euros at a project explicitly aimed at fully simulating a human brain within ten years:
http://en.wikipedia.org/wiki/Human_Brain_Project_(EU)
The important question, however, is not whether we could build AI. That question will be answered, positively or negatively, by time alone. Where our input is required is on the more important question of "should."
And I say no. I believe that Nicholas Agar has provided the most cogent reasoning so far here; I would be curious what you folks think of his Humanity's End.
The normative question of whether we should proceed with the sort of advanced artificial intelligence that Kurzweil and his comrades hope for is a good, important, and complicated one, and one that we have addressed repeatedly on this blog and in the pages of The New Atlantis (no one more ably and provocatively than Charles Rubin).
But the question of what is possible is an important one, too. Our ethical and political judgments must be guided by facts, often supplemented by predictions, and those facts and predictions are of course themselves often subjects of dispute. So I'm just curious what friends like Ari Schulman and Edward Feser (and anyone else who wants to chime in) predict the likely limitations of the AI project will be. (The EU throwing a billion euros at something doesn't necessarily mean there will be measurable results; I'll spare you any jokes about the Eurozone bailouts.)
Let me offer an example of the sort of thing that I'm talking about. Our friend (and New Atlantis colleague) Stephen Talbott made the following claim more than a decade ago:
"It's an extremely safe bet that in Ray Kurzweil's landmark year of 2030 (when machines are supposed to start leaving human intelligence hopelessly behind), there will be no supercomputer on earth that can be relied upon to deliver two successive and coherent responses in a truly open-ended, creative conversation."
Steve fleshes out, in that essay and many others, what he means by "a truly open-ended, creative conversation." (He means something very different from the sort of creepy conversations John Malkovich and other celebrities have been having with Siri.) Steve has offered what strikes me as a pretty testable prediction: either real conversation, in which a human being can be truly creatively engaged by a machine, will be possible by 2030 or it will not. Is he right or wrong?
I should correct myself. Obviously, you are right that "could" is also very important. (I hadn't really noticed Mr. Rubin's "Machine Morality" essay either.) AI with human capabilities arriving in ten years has very different implications for current action than its being developed hundreds of years from now.
In your post you seem to be asking about the practical capabilities of AI as opposed to its internal states. And that, to me, does seem the most important question. That is probably why I found the last paragraph of Mr. Schulman's review the most interesting.
With the HBP we have a serious scientific project, funded at a billion euros, aimed at creating what might turn out to be a Frankenstein. I don't think it is likely to succeed in a mere ten years either; Henry Markram seems to be playing the showman. But can anyone claim with, say, 99.9% certainty that it will fail? And if we are not sure, then "should" becomes not just a philosophical question but also a practical one, regardless of whether we think the chances of success are 1% or 10%. Actually, I have yet to find any critic of these technologies who I think has adequately thought this out.
It is good to see that your blog is alive.