[ExI] Kelly's future

Kelly Anderson kellycoinguy at gmail.com
Thu May 26 20:49:52 UTC 2011


On Mon, May 23, 2011 at 3:47 PM, Damien Sullivan
<phoenix at ugcs.caltech.edu> wrote:
> On Mon, May 23, 2011 at 03:22:02PM -0600, Kelly Anderson wrote:
>
>> Once achieved, an AGI is easily replicated. That much I will grant
>> you. But mixing explicit programming with a training process is very
>> difficult. Just look at how hard it is to change people. Changing me
>> from a religious zealot to an atheist was a very painful process that
>> took a couple of years of hard work. It was not just "changing the
>> programming", although that was, in a sense, exactly what it was.
>
> While the results of machine learning may well not be easily modifiable
> or reverse-engineerable, using people as evidence isn't very good.  We
> don't have explicit programming of people, or of brains, the way we do
> for computers.  Verbal instruction is a limited ability compared to
> being able to go in and change neural wiring directly.  Not that we'd
> know what to do if we could, but we don't even have that kind of safe
> access to people's brains.  Whereas even a genetically evolved neural
> network mess of code is completely open to our examination and
> modification.  Knowing what to do is another matter, but the fact that
> we can't do things to people through their skulls is kind of irrelevant.

Yes, what we can or can't do to people through their skulls is irrelevant.

My underlying assumption here is that the AGIs of the future will
(initially, at least) be based on the human intelligence model, and
will, in fact, be silicon-based human brain emulators. This assumption
is quite possibly false. However, given this assumption, I think what
I've been saying makes sense. Sorry I didn't make it explicit earlier
on.

In that light, AGIs are not "computers" in the sense in which you use the word.

-Kelly
