[ExI] ai class at stanford
Adrian Tymes
atymes at gmail.com
Mon Aug 29 05:54:04 UTC 2011
On Sun, Aug 28, 2011 at 9:45 PM, G. Livick <glivick at sbcglobal.net> wrote:
> But this knowledge comes at a price: we are looking behind the
> curtain, learning to do what the guy there does, with the attendant loss of
> technological innocence (or the ability to feign innocence). Any
> claims from graduates that certain things in the world of AI are practical
> and their emergence predictable can expect challenges from other graduates
> as to possible methods. Defense of such concepts as "uploading" will become
> impossible for people tasked with also explaining, in reasonably practical
> terms, just how it could be accomplished.
Uh huh. I suppose black powder is just an interesting toy of no military value,
there's no such thing as atomic decay because we can't conceive of it, and
heavier-than-air flight - having never been demonstrated before - is impossible,
then?
Seriously. Just because we do not today know how to do something (and if
we could explain it in reasonably practical terms, we probably could do it
today) does not mean we never will, nor that we cannot see how to go
about discovering how. If you want to make such a claim, the onus is on
you to prove that it is not, in fact, possible.
There are untold numbers of projects using AI techniques to simulate different
parts of what the human brain can do, with varying degrees of success. It
appears that the main remaining challenges are to improve those pieces, then
wire them all together. We know enough about how the human brain works
that it seems more likely than not that this will work...even if we cannot
describe exactly how the end result will function right now.
This is basic stuff, man. This is what it means to develop technology.
> Unless, of course, we all take a
> minor in Ruby Slippers.
Or that, if you want to call it that. But remember, it took the man behind the
curtain to give Dorothy that power. Knowing how it worked was part of him.
The showmanship of Oz was gone before the topic came up.
We live, every day, among things our ancestors of many generations ago
would call miraculous. They don't seem that way to us merely because we
know how they work. Outside of abstract philosophical arguments that
keep getting special-cased around by reality, I am not aware of any serious
evidence that we cannot create human-equivalent AIs - or even emulate
humans in silico. For instance, take the famous thought experiment where
you replace a brain with artificial neurons, one neuron at a time. We have
that capability today; it is merely impractically expensive, not impossible,
to actually conduct that experiment (say, on an animal, or a human who'd
otherwise be about to die, to avoid ethical problems). "Impractically
expensive" is the kind of thing that development tends to take care of.