[ExI] ai class at stanford

Adrian Tymes atymes at gmail.com
Mon Aug 29 20:03:18 UTC 2011


On Mon, Aug 29, 2011 at 12:11 PM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> I'm objecting, just a bit, on a technicality, to this statement... I
> don't think we understand how anything works to 100% detail. We don't
> know if it's all strings in 11 dimensions, or something else. What we
> do know is how to predict things with enough accuracy to be useful and
> reproducible.

Fair enough.  That's the sense I was going for.  Note, though, that this
phrasing leaves open the possibility of objecting even if we do achieve it.

Take, for example, me.  People sometimes question my chains of logic -
and sometimes rightly so.  I'm human, nobody's perfect all the time.  But
let's say I get uploaded and then make a mistake.  Even if there are scads
of evidence that the emulation is most likely perfect, won't there be a
temptation to always declare that any mistakes I make are due to flaws
in my emulated mind?  This will be impossible to completely prove or
disprove: even if my emulation and my original version live side by side,
experiencing and learning much the same things, by the time any
discrepancy comes up, there will inevitably have been differences that
could cause divergent thoughts.  (For example, the exact moment we wake up,
thought cycles devoted to use of our different physical capabilities, and
so on.)  If the uploading process is destructive and one-way, it becomes
even harder to prove or disprove, as there won't be an original me to
compare to.

All we need is a good enough understanding to reproduce a human mind
in silico.  As demonstrated elsewhere, this need not be perfect.  Even
some obvious differences (especially where there are existing analogues,
such as a slight decrease in reasoning capability akin to what old people
currently experience) might be tolerated, especially if there is a path to
correct those differences over time (such as faster hardware), in exchange
for:

* the perception, by the individual and those other people and institutions
the individual cares about (such as the law), that this is the same person,
which requires the preservation of memory;

* a continued ability to actively influence the world (as opposed to
"immortality through one's works" or otherwise relying exclusively on
other people to react to what one did, without the capability to react to
their reactions);

* and a baseline of ability at least equal to human average in the areas
the individual cares about (movement and speech are likely to be
required; equipment to manufacture new humans can be discarded, or
at least removed from the shell the mind inhabits, in many cases).
