[ExI] What might be enough for a friendly AI?
Samantha Atkins
sjatkins at mac.com
Fri Nov 19 17:26:20 UTC 2010
On Nov 18, 2010, at 11:15 AM, Dave Sill wrote:
> 2010/11/18 John Clark <jonkc at bellsouth.net>:
>> When people talk about friendly AI they're not really talking about a
>> friend, they're talking about a slave, and the idea that you can
>> permanently enslave something astronomically smarter than yourself is nuts.
>
> I disagree. It's pretty easy to contain things if you're careful. A
> moron could have locked Einstein in a jail cell and kept him there
> indefinitely.
Did you ever read the series of challenges for the thought experiment of keeping such an AGI locked up? If not, I suggest you do so. Comparing Einstein or any other human to an AGI is a major error.
- s