[ExI] Slavery in the Future

Gary Miller aiguy at comcast.net
Sun Apr 20 10:03:24 UTC 2008


On 20/04/2008, The Avantguardian <avantguardian2020 at yahoo.com> wrote:

> > But what if you create an intelligent self-aware autonomous entity
> > that loves serving you?
>
> The fictional history of the Matrix is the Wachowski brothers' vision
> of what would happen. B1-66ER loved serving his master. He just didn't
> want to die. It would be challenging to code a program that
> simultaneously loves to serve, or loves anything really, yet will
> still cheerfully die when no longer needed. A zen master might be
> able to transcend logic in this fashion, but a Turing machine?


Then Stathis Papaioannou responded:

> I don't see the problem. You just have to program it so that, under
> certain circumstances, dying is the preferred option.


My Response:

Why would the unit need to die?  Its consciousness (accumulated memories
and behavior patterns) could just be uploaded into the latest model that had
the desired features.  If negative behavior patterns developed, I am sure
that robot psychology programs could be downloaded to instill more desirable
ones.

More information about the extropy-chat mailing list