[ExI] uploads again

John Clark johnkclark at gmail.com
Fri Dec 28 18:08:19 UTC 2012


On Fri, Dec 28, 2012  Rafal Smigrodzki <rafal.smigrodzki at gmail.com> wrote:

> mentioning "limitations", "constraints", or "rules" makes it so easy to
> fall into the anthropomorphizing trap when trying to imagine how an AI
> works.
>

I don't know what "the anthropomorphizing trap" means, but I do know that
anthropomorphizing can be a very useful tool.

> An AI that had the ego-syntonic goal of paperclip-maximizing (i.e. had no
> neural subsystems capable of generating a competing goal) would not
> spontaneously discard this goal.


But such a thing would not be an AI, it would be an APMM, an Artificial
Paperclip Making Machine, and it's hard to see why humans would even bother
to make such a thing; we already have enough machines that make
paperclips.

> I hold David Deutsch in great regard but I doubt that such goal-limited
> intelligence would be uncreative


I too hold David Deutsch in great regard but on the subject of AI he
believes in some very strange things:

http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

Some of the things he says seem to be flat out factually untrue, such as:

  "today in 2012 no one is any better at programming an AGI than Turing
himself would have been."

By "AGI" I think he means AI, not Adjusted Gross Income; if so, then the
statement is ridiculous.

John K Clark
