[ExI] How could you ever support an AGI?

ABlainey at aol.com ABlainey at aol.com
Fri Mar 7 20:40:32 UTC 2008


In a message dated 07/03/2008 02:57:47 GMT Standard Time, rpwl at lightlink.com 
writes:


> Unfortunately, I think I was not clear enough, and as a result you have 
> misunderstood what I said in rather a substantial way.
> 

More than likely. It was late, and I think my view of your post was clouded by 
reading previous posts. On reading it again I would agree that my response 
wasn't quite in line with it, but I think we may still disagree on some issues.

> When you build an AGI, you *must* sort out the motivation mechanism 
> ahead of time, or the machine will simply not work at all.  You don't 
> build an AGI and *then* discover what its motivation is.
> 

Agreed in principle. However, I still subscribe to the idea that the end-result 
AGI will have unpredictable motivation regardless of its starting point. 
I'll say again that the development is a stochastic process unless we code every 
single line by hand, spoon-feed its knowledge base, and fully understand all 
possible outcomes of the system to the Nth degree.
That is unrealistic and impossible to achieve even in today's non-AI software, 
let alone in developing an AGI. Imagine a self-learning Windows! 


> If you do not understand the motivation system before you build it, then 
> it will not work, as simple as that.

Agreed. But as above, we cannot know in advance what the AGI will decide to 
do even if we can control its motivations.

> 
> The reason why many people do talk as if a future AGI will have a 
> "surprise" motivation system is that today's AI systems are driven by 
> extremely crude and non-scalable "goal-stack" control systems, which are 
> great for narrow-AI planning tasks, but which become extremely unstable 
> when we imagine using them in a full-blown AGI.

> 
> But when people imagine an extended form of goal-stack drive system 
> controlling a future AGI, they fail to realise that the very same 
> instability that makes the AGI seem so threatening will also make it so 
> unstable that it will never actually become generally intelligent.

The methodology used to build the AI/AGI in the first place is irrelevant to 
the finished AGI. There are infinite ways to get mathematically from 1 to 100, 
and likewise infinite ways in which an AGI could rewrite its motivational 
code. Just because our ability to code AI is limited by preconceptions, personal 
aptitude, knowledge of coding, and accepted methodologies such as "goal-stack" 
control systems does not mean the AGI will follow our limited rules. So we 
cannot predict the outcome.

> 
> The bottom line:  you cannot make statements like "An ...[AGI]... could 
> and probably will do major damage", because there is no "probably" about 
> it.  You either set out to make it do damage and be intelligent at the 
> same time (an extremely difficult combination, in practice, for reasons 
> I have explained elsewhere), or you don't!  There is no surprise.
> 

The full quote ended with 'probably not through design or desire, but just 
through exploration of ability or pure accident,' which is the important bit. 
If my car is fitted with autobrakes that apply when closing on a stationary 
vehicle, they might still not stop the car from running over a dog. The point is 
that any AGI must have explicit rules, or the ability to stop itself doing 
something, in order to be safe.
If we give the AGI a basic motivation to 'learn all it can,' we must add an 
exception that it cannot learn what happens to a human dropped into a vat 
of acid.
We can overcome this to an extent with blanket rules, but the basic premise is 
still the same: if an AGI can cause damage, it will.
I have seen first-hand, many times, what happens when AI systems come across 
situations for which they have no explicit rules. I once watched 
(from a distance) a CNC machine throwing 3-metre steel bars across a workshop, 
simply because a part-off tool broke. This was a very dangerous example of a 
very simple AI following an equally simple rule and nearly killing someone.
As with an AGI, all I could do was stop and stare, then wait for the dust to 
settle.
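
To make the 'explicit rules' point concrete, here is a minimal sketch in Python 
(the names and actions are purely hypothetical, nothing like a real AGI or CNC 
controller): a deny-list safety check only ever stops the things its designers 
thought to list, and the broken part-off tool simply isn't on the list.

    # Purely hypothetical sketch: a controller that only refuses actions
    # appearing on an explicit deny list. Anything the designers failed to
    # anticipate falls through to the default branch and is executed anyway.

    FORBIDDEN_ACTIONS = {
        "drop_human_in_acid",       # the exception we remembered to write down
        "exceed_bar_feed_limit",
    }

    def safe_to_execute(action: str) -> bool:
        """True unless the action is explicitly forbidden."""
        return action not in FORBIDDEN_ACTIONS

    def run_controller(planned_actions):
        for action in planned_actions:
            if safe_to_execute(action):
                print(f"executing: {action}")   # stand-in for the real actuator
            else:
                print(f"halted on: {action}")   # stand-in for an emergency stop

    # The broken part-off tool produces a state nobody listed, so no rule
    # matches and the machine carries on regardless:
    run_controller(["feed_bar", "part_off_with_broken_tool", "feed_bar"])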

> For someone to talk about discovering an AGI's motivation after the fact 
> would be like a company building a supersonic passenger jet and then 
> speculating about whether the best way to fly it is nose-forward or 
> nose-backward.
> 
> 
> 
> Richard Loosemore
> 

The jet could not reconfigure its aerodynamics after assembly; if it could, 
perhaps it would fly better backwards ;o) What's more, many planes are already 
aerodynamically capable of flying backwards. But that's a whole different 
kettle of fish. Ask a group of well-educated pilots what makes a plane fly and 
you would be surprised at the answers LOL.

Alex   