Monday, July 25, 2011

I chat with Ray Kurzweil about replicating Ray

As I chat with Ray Kurzweil after a private screening of Transcendent Man, I'm equally awestruck by his genius and his certitude.

As you know, Ray's primary message is that computers will soon be much smarter than humans. In fact, orders of magnitude smarter. Period. End of discussion.

No hedging on definitions of "intelligence". Nor any hedging on timing. It will easily happen in the next 40 years, most of it in the next 20-30 years.

Bam, next question, my friend?

I feel aligned with much of Ray's logic and conviction. Compared to puny human brains, computers have:
1. More memory
2. Better memory
3. Faster processors
4. Faster networking capability
5. Better power supplies (compared to our biology)
6. Less noise (emotional, chemical, etc.)

I know the long "last mile" for AI is judgment. I know AI is moving very quickly on many dimensions, but slowly on others. As we chat, I start to wonder: what will be the last bastions of "human-ness"? I imagine my computer telling me, "I told you so!" I hear Ray's voice...

He is sharing his hope that he will live long enough to "transition" to a perfect digital copy of himself. You read that right: a perfect, thinking, "feeling" digital version of himself.

As Ray and I chat, my feelings of cynicism rise less than I expect. His confident, warm manner invites dialogue.

I check myself, wondering whether I'm thinking too small, and try to suspend my disbelief, as if I were still watching a movie. Then I quickly decide it's time to get in the game.

I ask Ray, "In your future worldview, couldn't the 'real' you be infinitely replicated? What are your thoughts on version control and on counterfeit Rays?"

I get a thoughtful, optimistic response.

That's the very best part about Ray: his optimism. It cuts through cynicism like a knife and drives both his genius and his audacious certitude.
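For fun, here's what a tiny piece of "version control for Rays" might look like in code. This is purely my own illustration, not anything Ray described: it assumes each digital copy carries a signed lineage record, and that only the original holds the signing key, so forked versions can be traced back to their parent and counterfeits flagged. All names here (sign_snapshot, is_authentic, the key) are hypothetical.

    # A toy sketch only, not Ray's proposal. Each copy records which copy it
    # came from, and the record is signed with a key only the original holds,
    # so a "counterfeit Ray" without that key can be detected.
    import hashlib
    import hmac

    SECRET_KEY = b"held-only-by-the-original"  # stand-in for a real private signing key

    def sign_snapshot(state: bytes, parent_digest: str = "") -> dict:
        """Record a new version of the 'self' and which parent copy it forked from."""
        digest = hashlib.sha256(parent_digest.encode() + state).hexdigest()
        signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
        return {"state": state, "parent": parent_digest,
                "digest": digest, "signature": signature}

    def is_authentic(snapshot: dict) -> bool:
        """Only copies signed with the original's key count as 'the real Ray'."""
        expected = hmac.new(SECRET_KEY, snapshot["digest"].encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, snapshot["signature"])

    ray_v1 = sign_snapshot(b"memories, voice, preferences")
    ray_v2 = sign_snapshot(b"adds the memories of 2045", parent_digest=ray_v1["digest"])
    counterfeit = dict(ray_v2, signature="forged")

    print(is_authentic(ray_v1), is_authentic(ray_v2), is_authentic(counterfeit))
    # -> True True False

Of course, this only settles provenance, not identity; whether a verified copy is still "the real you" is exactly the metaphysical question Ray waves away with a smile.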

As you may know, many computer scientists agree with Ray that computers will eventually become more intelligent than humans. Marvin Minsky's 1982 essay "Why People Think Computers Can't" is still a fun read. Minsky addresses the question of whether computers can be creative and writes, "We shouldn't intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas."

AI scientists disagree completely about endgame states, holding views along the widest imaginable spectrum between Utopia and Dystopia. While some worry deeply about how AI could run seriously amok (see NYT story), Kurzweil's unflappable optimism makes him the spiritual leader of the Utopian view.

I thanked Ray for the chat via email the next day and received a lovely response, along with autographed copies of his two latest books.

4 comments:

  1. i disagree with large parts of ray's analysis, largely because i think he does not have a view in which human evolution affects memory in the near-term (the bruce lipton stuff: http://www.brucelipton.com/spontaneous-evolution-overview). assuming time is cyclical and humans can tap into a collective memory field to see the past and future puts the man vs machine debate in a different light, if one accepts the science arguments advanced by lipton and that group of scientists as valid.

  2. Exciting! I wonder what types of, and how many, new intellectual disciplines are going to sprout up (or further develop) around AI.

    It's interesting to think, too, that today our nation is dealing with moral dilemmas around concepts like abortion and euthanasia - and it seems that, with Ray's and others' future digital selves, the moral and metaphysical dilemma that would arise lies in "manufactured heaven" - but maybe the religious philosophers of thousands of years ago weren't so different from Ray, simply being futurists :P

  3. hey bob! good movie. i would like to ask ray why he wants to live forever as himself? why not a better version of himself? or even someone else? or even hack together the best of past geniuses and current ones?

  4. This comment has been removed by the author.
