[ExI] Universally versus 'locally' Friendly AI
stefano.vaj at gmail.com
Tue Mar 15 17:43:11 UTC 2011
On 13 March 2011 20:51, Ben Zaiboc <bbenzai at yahoo.com> wrote:
> Kelly Anderson <kellycoinguy at gmail.com> declared:
>> If we screw up on the first generation of AGI, then
>> humanity is toast, IMHO.
> If we screw up, and if we don't screw up, Humanity, as it is now (circa 2011), will be toast.
Most of humanity 2011 will be dead anyway before 2111. Some of them
might be killed by fellow human beings, some by very simple
mechanisms, some by AGIs, some by even more powerful and sophisticated
computers not exhibiting any kind of "AGI-like" or otherwise
anthropomorphic features whatsoever.
Certainly, most will be killed by the lack of all that, or by old age.
What else is new?
The continued obsession with the Golem-like "threat" supposedly posed
(to whom?) by the ethological emulation of biological organisms
running on silicon, as opposed to other silicon-based and organic
risks, really leaves me astonished.