[extropy-chat] Fools building AIs

Mike Dougherty msd001 at gmail.com
Fri Oct 6 12:42:46 UTC 2006


On 10/5/06, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
>
> I don't understand your "what if".  What if what?  What if the above is
> the actual outcome?  (Answer: it's a complex scenario with no specific
> support given so it's very improbable a priori.)  What about the above
> as an optimal solution from a humane standpoint?  (Answer: it seems
> easier to conceive of third alternatives which are better.)
>

in response to:
> I asked a friend recently what he thought about the prospect of
> superhuman beings annihilating humans to make themselves more
> processing substrate...
>
> His response: "It's about time."
>
> When asked what he meant, he said simply, "We're not so great, and
> we've been dominating the planet long enough.  Let something better
> have its turn."

True, the scenario is unlikely - about as unlikely as superhuman beings ever
annihilating humans at all.  There is a much greater likelihood that we will
annihilate ourselves first.

