[ExI] Universally versus 'locally' Friendly AGI

Samantha Atkins sjatkins at mac.com
Tue Mar 8 22:13:46 UTC 2011


On 03/08/2011 11:52 AM, Stefano Vaj wrote:
> On 8 March 2011 19:28, Kelly Anderson<kellycoinguy at gmail.com>  wrote:
>> If ever there were a case for anthropomorphization, future AGIs are
>> it. They might be (probably will be) heavily modeled after us. They may
>> eventually come to be seen as us, or as our offspring at least. And yes,
>> some of them will BE us.
> Basically, this is anthropomorphisation by definition, since it has been
> clarified by now that much more "intelligent" computers would not be
> considered AGI unless they exhibit the kind of human-like behaviour
> allowing them to compete with the Real Thing in Turing tests.

This must be tongue in cheek.  A true AGI may very well flunk the Turing 
Test, not by being too stupid or less capable, but by being unwilling to 
dumb itself down to such an asinine level.  I doubt very much that 
passing as human will be high on its priority list.
> As a consequence, AGIs will probably not be the most intelligent
> entities around. They will simply be those among them that regard
> themselves as humans, or as humans' children.

Then why even bother?  If AGIs are not much, much better than that in 
potential, then what is the point?

- samantha
