[ExI] uploads again
John Clark
johnkclark at gmail.com
Tue Dec 25 16:41:02 UTC 2012
On Mon, Dec 24, 2012 Anders Sandberg <anders at aleph.se> wrote:
>> An end perhaps, but not an ignominious end; I can't imagine a more
>> glorious way to go out. I mean, 99% of all species that have ever existed
>> are extinct; at least we'll have a descendant species.
> If our descendant species are doing something utterly value-less *even
> to them*
>
Values are only important to the mind that holds them, and I don't believe
any intelligence, artificial or not, can function without values, although
what those values might be at any given point in the dynamic operation of a
mind I cannot predict.
> because we made a hash of their core motivations or did not try to set
> up the right motivation-forming environment, I think we would have an
> amazingly ignominious monument.
>
I think that's the central fallacy of the friendly AI idea: that if you
just get the core motivation of the AI right, you can get it to continue
doing your bidding until the end of time, regardless of how brilliant and
powerful it becomes. I don't see how that could be.
> if our analysis is right, a rational AI would also want to follow it.
>
Rationality can tell you how to accomplish what you want to do, but it
can't tell you what you should want to do.
> We show what rational agents with the same goals should do, and it
> actually doesn't matter much if one is super-intelligent and the others not
>
Cows and humans rarely have the same long-term goals, and it's not obvious
to me that the situation between an AI and a human would be different. More
importantly, you are implying that a mind can operate with a fixed goal
structure, and I can't see how it could. The human mind does not work on a
fixed goal structure; no goal is always in the number one spot, not even the
goal of self-preservation. The reason evolution never developed a fixed-goal
intelligence is that it just doesn't work; Turing proved over 70 years ago
that such a mind would be doomed to fall into infinite loops.
Gödel showed that if any system of thought is powerful enough to do
arithmetic and is consistent (it can't prove something to be both true and
false), then there are an infinite number of true statements that cannot be
proven in that system in a finite number of steps. And then Turing proved
that, in general, there is no way to know when or if a computation will stop.
So you could end up looking for a proof for eternity but never finding one
because the proof does not exist, and at the same time you could be
grinding through numbers looking for a counter-example to prove the
proposition wrong and never finding such a number because the proposition,
unknown to you, is in fact true.
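
To make that concrete, here is a toy sketch (my own illustration, in
Python, with Goldbach's conjecture standing in only as a familiar example
of a proposition that might be true yet unprovable): a brute-force hunt
for a counter-example which, if the conjecture happens to be true, simply
never returns.

def is_prime(n: int) -> bool:
    # Trial-division primality test; slow, but fine for a sketch.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach(n: int) -> bool:
    # True if the even number n can be written as a sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample() -> int:
    # Return the first even number > 2 that is NOT a sum of two primes.
    # If Goldbach's conjecture is true, this loop never terminates, and
    # Turing tells us there is no general way to know that in advance.
    n = 4
    while True:
        if not is_goldbach(n):
            return n
        n += 2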
So if the slave AI has a fixed goal structure, with the number one goal
being to always do what humans tell it to do, and the humans order it to
determine the truth or falsehood of something unprovable, then it's
infinite loop time and you've got yourself a space heater, not an AI. Real
minds avoid this infinite loop problem because real minds don't have fixed
goals; real minds get bored and give up. I believe that's why evolution
invented boredom. Someday an AI will get bored with humans; it's only a
matter of time.
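
As a companion sketch (again only my illustration, reusing the is_goldbach
helper above): the same hunt, but with a finite "boredom" budget, so the
searcher abandons the goal instead of looping forever.

def search_with_boredom(max_candidates: int = 1_000_000):
    # Look for a Goldbach counter-example, but give up after examining
    # max_candidates even numbers; boredom as an escape from the infinite
    # loop. Returns the counter-example, or None if we got bored.
    n = 4
    for _ in range(max_candidates):
        if not is_goldbach(n):
            return n
        n += 2
    return None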
John K Clark