[ExI] Safety of human-like motivation systems

Samantha Atkins sjatkins at mac.com
Fri Feb 4 20:05:06 UTC 2011

On 02/03/2011 10:09 PM, John Clark wrote:
> On Feb 3, 2011, at 3:50 PM, Samantha Atkins wrote:
>> What it did show is that, for the domain of formally definable 
>> mathematical claims in a closed system using formalized logic, there 
>> are claims that cannot be proven or disproven.  That is a bit 
>> different than saying in general that there are countless claims 
>> that cannot be proven or disproven.
> What Goedel did is to show that if any system of thought is powerful 
> enough to do arithmetic and is consistent (it can't prove something to 
> be both true and false) then there are an infinite number of true 
> statements that cannot be proven in that system in a finite number of 
> steps.

Yes, in that sort of system.
>> and that you can't even tell when you are dealing with one.
> And what Turing did is prove that in general there is no way to know 
> when or if a computation will stop. So you could end up looking for a 
> proof for eternity but never finding one because the proof does not 
> exist, and at the same time you could be grinding through numbers 
> looking for a counter-example to prove it wrong and never finding such 
> a number because the proposition, unknown to you, is in fact true. So 
> if the slave AI must always do what humans say and if they order it to 
> determine the truth or falsehood of something unprovable then its 
> infinite loop time and you've got yourself a space heater.

It is not necessary that a computation stop/terminate in order for 
useful results to ensue; the results produced along the way do not 
depend on such termination.
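A toy illustration of that point (Python; the example and names are 
mine, not anything from FAI theory): a process that provably never 
terminates can still emit results that are each final and usable the 
moment they appear.

```python
from itertools import islice

def primes():
    """Generate primes forever.  The computation as a whole never
    terminates, yet every value yielded along the way is a finished,
    usable result that does not depend on the search ever stopping."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# Consume only as much of the non-terminating process as we need.
first_ten = list(islice(primes(), 10))
# first_ten == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The consumer, not the producer, decides when to stop; termination of 
the underlying computation is simply not required.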

Why would an FAI bother looking for such a proof for eternity exactly?

An AGI/FAI is not a slave to human requests / commands.

> So there are some things in arithmetic that you can never prove or 
> disprove, and if that’s the case with something as simple and 
> fundamental as arithmetic imagine the contradictions and ignorance in 
> more abstract and less precise things like physics or economics or 
> politics or philosophy or morality. If you can get into an infinite 
> loop over arithmetic it must be childishly easy to get into one when 
> contemplating art. Fortunately real minds have a defense against 
> this, but not fictional fixed goal minds that are required for a AI 
> guaranteed to be "friendly"; real minds get bored. I believe that's 
> why evolution invented boredom.

Arithmetic/math has a more rigorous construction, one that may or may 
not include all valid/useful ways of deciding questions.   A viable FAI 
or AGI is not a fixed-goal mind.   So you seem to be raising a bit of a 
straw man.

>> Actually, your argument assumes:
>> a) that the AI would take the find a counter example path as it only 
>> or best path looking for disproof;
> It doesn't matter what path you take because you are never going to 
> disprove it because it is in fact true, but you are never going to 
> know its true because a proof with a finite length does not exist.

Then how do you know that it is "in fact true"?  Clearly there is some 
procedure by which one knows this if you do know it.

>> b) that the AI has nothing else on its agenda and does not take into 
>> account any time limits, resource constraints and so on.
> That's what we do, we use our judgment in what to do and what not to 
> do, but the "friendly" AI people can't allow a AI to stop obeying 
> humans on its own initiative, that's why its a slave, (the politically 
> correct term is friendly).

FAI theory does not hinge on, require, or mandate that the AI obey 
humans, especially not slavishly and stupidly.  If a human knew what was 
really best in all circumstances, well enough to order the FAI about in 
this manner with the best outcomes, then we would not need the FAI.
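Point (b) above can be made concrete.  A sketch (Python; the predicate 
and budget are illustrative, chosen by me): a search that respects a 
resource constraint returns a three-valued verdict instead of grinding 
forever.

```python
def find_counterexample(predicate, budget):
    """Search n = 0, 1, 2, ... for a counterexample to `predicate`,
    but give up after `budget` candidates rather than search forever.
    Returns (n, "refuted") on a counterexample, (None, "unknown")
    when the budget runs out -- no space heater required."""
    for n in range(budget):
        if not predicate(n):
            return n, "refuted"
    return None, "unknown"

# A claim that happens to be true: every n satisfies n*n >= 0.
# No counterexample exists, so within any budget the verdict is "unknown".
no_hit = find_counterexample(lambda n: n * n >= 0, 10_000)
# A false claim: every n is less than 100.  Refuted at n = 100.
hit = find_counterexample(lambda n: n < 100, 10_000)
```

"Unknown" is a perfectly good answer for a mind with other things on 
its agenda; only a strawman AI is obliged to convert an undecidable 
question into an eternity of computation.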

>> An infinite loop is a very different thing than an endless quest for 
>> a counter-example.  The latter is orthogonal to infinite loops. An 
>> infinite loop in the search procedure would simply be a bug.
> The point is that Turing proved that in general you don't know if 
> you're in a infinite loop or not; maybe you'll finish up and get your 
> answer in one second, maybe in 2 seconds, maybe in ten billion years, 
> maybe never.

A search that doesn't find a desired result is not an infinite loop 
because no "loop" is involved.  Do you consider any and all 
non-terminating processes to be infinite loops?  Is looking for the 
largest prime (yes, I know there provably isn't one), an infinite loop 
or just a non-terminating search?  Do you distinguish between them?
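The distinction is easy to exhibit.  A genuine infinite loop revisits 
the same state forever; a non-terminating search examines a fresh 
candidate on every step and makes real progress.  A Goldbach-style 
sketch (Python, my own toy helpers; bounded here only so it runs):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_counterexample(limit):
    """Look for an even number >= 4 that is NOT a sum of two primes.
    Each iteration inspects a new candidate -- a search, not a loop
    over unchanging state -- and nobody knows whether, run unbounded,
    it would ever halt."""
    for n in range(4, limit, 2):
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n          # a counterexample: Goldbach refuted
    return None               # none found below `limit`

result = goldbach_counterexample(1000)
# result is None: Goldbach holds for every even number below 1000
```

Contrast `while True: pass`, which is a loop in the strict sense: no 
new state, no new information, a bug rather than a quest.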

- samantha
