[ExI] ai class at stanford
kellycoinguy at gmail.com
Mon Aug 29 18:59:17 UTC 2011
On Sun, Aug 28, 2011 at 11:54 PM, Adrian Tymes <atymes at gmail.com> wrote:
> Seriously. Just because we do not today know how to do something (and if
> we could explain it in reasonably practical terms, we probably could do it
> today), does not mean we never will, nor that we can not see how to go
> about discovering how. If you want to make such claims, the onus is upon
> you to prove that it is not, in fact, possible.
The best effort I've seen to work out this sort of thing is Physics of
the Impossible: A Scientific Exploration into the World of Phasers,
Force Fields, Teleportation, and Time Travel by Michio Kaku. He
creates a vocabulary for talking about different degrees of
impossibility, and then discusses whether the things we see in science
fiction are Class I, II, or III impossibilities. If you want to prove
that AI can't reach human-level performance, I would suggest starting
by figuring out which class of impossibility it is, and why.
Of course my gut feeling is that it's nowhere near impossible.