[ExI] ai class at stanford
kellycoinguy at gmail.com
Tue Aug 30 12:26:16 UTC 2011
On Mon, Aug 29, 2011 at 2:03 PM, Adrian Tymes <atymes at gmail.com> wrote:
> On Mon, Aug 29, 2011 at 12:11 PM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> I'm objecting, just a bit, on a technicality, to this statement... I
>> don't think we understand how anything works to 100% detail. We don't
>> know if it's all strings in 11 dimensions, or something else. What we
> do know is how to predict things with enough accuracy to be useful and reproducible.
> Fair enough. That's the sense I was going for. Though, note that that
> phrasing leaves open the possibility of objecting even if we do achieve it.
So, if we pass the Turing test, for example, without understanding
100% how humans do it, then we understand how humans talk "well
enough" to be useful and reproducible. In this "engineering sense",
then, the Turing test says we understand "intelligence" to a
particular, useful degree.
> Take, for example, me. People sometimes question my chains of logic -
> and sometimes rightly so. I'm human, nobody's perfect all the time. But
> let's say I get uploaded then make a mistake. Even if there are scads of
> evidence that the emulation is most likely perfect, won't there be a
> temptation to always declare that any mistakes I make are due to flaws
> in my emulated mind? This will be impossible to completely prove or
> disprove: even if my emulation and my original version live side by side,
> experiencing and learning much the same things, by the time any
> difference comes up, there will inevitably have been differences that could
> cause different thoughts. (For example, the exact moment we wake up,
> thought cycles devoted to use of our different physical capabilities, and
> so on.) If the uploading process is destructive and one-way, it becomes
> even harder to prove or disprove, as there won't be an original me to
> compare to.
So rather than calling this the "Turing" test, we'll call it the
"Adrian" test. If for X minutes I can't tell the difference between a
computer pretending to be Adrian and the real Adrian, then we've
passed a stronger version of the Turing test, one that could serve as
a test of whether we have successfully "uploaded" Adrian. Yes, there
will be flaws in the emulated mind, and yes, even if the mind is
perfectly emulated, there will be immediate divergence.
I consider the test for "not too much" divergence to be the ability to
successfully merge the experiences of the emulation back into the
original. That is, if the emulation spends a month learning Chinese,
and that learning can be successfully merged back into the original
brain such that it now speaks Chinese, and remembers learning Chinese,
then that was a successful emulation. If the divergence is too great
to reintegrate, then that was a failed emulation. I think this is
technically possible someday, especially if the emulation is run at
1,000,000 times the speed of the original; then the original doesn't
have time to diverge much. I could learn a lot of Chinese that way,
no? I am reminded of the ST:TNG episode ("The Inner Light") where
Picard learns to play the flute and be a family man... he still
remembers how to play the flute afterwards.
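The merge criterion above can be caricatured in a few lines of code. This is purely an illustrative sketch of the thought experiment: the set-of-memories model, the `divergence` metric, and the 0.5 threshold are all invented here for illustration, not anything anyone knows how to compute for an actual brain.

```python
# Thought-experiment sketch only: memories modeled as a set of labels,
# and "divergence" as the fraction of the emulation's memories that
# the original does not share. All names and numbers are hypothetical.

def run_emulation(memories, new_experiences):
    """Return the emulation's memory state after a fast-time run."""
    return memories | new_experiences

def divergence(original, emulation):
    """Toy metric: fraction of the emulation's memories absent
    from the original."""
    return len(emulation - original) / len(emulation)

def try_reintegrate(original, emulation, threshold=0.5):
    """Merge the emulation back into the original if divergence stays
    below the threshold; otherwise count it as a failed emulation."""
    if divergence(original, emulation) <= threshold:
        return original | emulation, True
    return original, False

original = {"childhood", "college", "flute lessons"}
emulation = run_emulation(original, {"speaks Chinese", "month of study"})
merged, ok = try_reintegrate(original, emulation)
```

On this toy model, a month of Chinese added to a lifetime of memories falls under the threshold and merges cleanly, while an emulation that diverged in most of its memories would be rejected as unmergeable.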
Furthermore, are you morally responsible for anything your emulation
does? If you are religious, do you have to do penance for sins
committed by your emulation during the divergence?
> All we need is a good enough understanding to reproduce a human mind
> in silico. As demonstrated elsewhere, this need not be perfect.
> some obvious differences (especially where there are existing analogues,
> such as slightly decreased reasoning capability akin to what old people
> currently experience) might be tolerated, especially if there is a path to
> correct those differences over time (such as faster hardware), in exchange for:
> * the perception, by the individual and those other people and institutions
> the individual cares about (such as the law), that this is the same person,
> which requires the preservation of memory;
As a thought exercise... if you choose not to reintegrate the
divergent emulation, then you wouldn't be responsible for what it did;
it was just a virtual reality run. But if you do reintegrate, then
you are responsible???
> * a continued ability to actively influence the world (as opposed to
> "immortality through one's works" or otherwise relying exclusively on
> other people to react to what one did, without the capability to react to
> their reactions);
Again, if there is reintegration, then the emulation achieves
continued immortality through that reintegration, and dies
meaninglessly if there is no reintegration.
> * and a baseline of ability at least equal to human average in the areas
> the individual cares about (movement and speech are likely to be
> required; equipment to manufacture new humans can be discarded, or
> at least removed from the shell the mind inhabits, in many cases).
I think this can be dealt with through the limits of the VR that the
emulation operates in.