[ExI] The second step towards immortality

spike spike66 at att.net
Fri Jan 10 03:28:40 UTC 2014


>... On Behalf Of Ben
Subject: Re: [ExI] The second step towards immortality

"spike" <spike66 at att.net> ha scritto:

>>..."At some future time, pre-singularity, we will likely come up with a
script which will simulate self-consciousness so well, it could convince
some people that it is self-aware (even if the writers of the code know it
is just a pile of clever code.)"

>...To me, this sounds analogous to "one day, someone will get so good at
simulating music using digital code that it could convince some people that
it really /is/ music (even if the writers know that it is just a pile of
clever code)!  What a jape!"  Something about ducks sounding and looking
like ducks comes to mind.  (IOW, an emulation of an information process is
an information process.)  Ben Zaiboc
_______________________________________________



Yeeeeeaaano.

If we wanted to take the time, we could create a big lookup table in Excel
that would sound a lot like a human trying to convince another human it is a
human.  In computer chess we argue (correctly) that the computer is actually
playing chess.  But for the first several moves it really isn't.  It is just
using a lookup table, derived by humans from experience.  Of course humans
memorize the first few moves as well, so we really aren't playing chess
either.  But for the computer, there is no calculation at first, just table
lookup, as the sketch below illustrates.
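
Here's a toy sketch of the point, in Python; the three book entries are
standard openings and are listed only for illustration:

    # Toy opening book: the engine's first moves are pure table lookup.
    opening_book = {
        (): "e4",              # as White, play 1.e4 from the start
        ("e4",): "c5",         # as Black, meet 1.e4 with the Sicilian
        ("e4", "c5"): "Nf3",   # standard continuation
    }

    def book_move(moves_so_far):
        """Return the canned reply if known, else None (search begins)."""
        return opening_book.get(tuple(moves_so_far))

    print(book_move([]))        # 'e4'  -- no calculation, just lookup
    print(book_move(["d4"]))    # None  -- out of book; now it must think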

We could write a spreadsheet or lookup table that could respond to nearly
any question, even if it doesn't always work very well.  It could be made
like Eliza, where it does some table lookup and some low-level synthesis,
rearranging the question into an answer when it doesn't have a response in
its table.
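
A rough sketch of that scheme, again in Python; the canned phrases and
pronoun-swap rules are invented for illustration:

    # Eliza-style responder: table lookup first, and when that fails,
    # "low-level synthesis" by reflecting the question back with the
    # pronouns swapped.  All phrases and rules here are made up.
    canned = {
        "how are you": "I am fine, thank you.",
        "are you self-aware": "I think, therefore I am.  I think.",
    }

    swaps = {"i": "you", "me": "you", "my": "your",
             "you": "I", "your": "my", "am": "are"}

    def respond(question):
        key = question.lower().strip(" ?!.")
        if key in canned:                      # table lookup
            return canned[key]
        words = [swaps.get(w, w) for w in key.split()]
        return "Why do you say " + " ".join(words) + "?"

    print(respond("How are you?"))                 # table hit
    print(respond("I think my code is conscious")) # reflected back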

Ben, that wouldn't be artificial intelligence.  But if we worked at it, we
might be able to fool some people into thinking it is, just as the Eliza
experiment that was run in a teen chat room fooled the hell out of some of
the youngsters.  (I still laugh when I think of that gag.)

OK well, so what if we do?  Then some people go off thinking sentient life
exists in this machine, and that to turn it off would be to kill whatever
that is.  Then what?  I can see some serious ethical questions looming,
because we would be screwing with the heads of those who were convinced in
this manner, oy.

spike





