[ExI] malevolent machines

Kelly Anderson kellycoinguy at gmail.com
Wed Apr 9 14:48:35 UTC 2014


On Tue, Apr 8, 2014 at 5:14 PM, William Flynn Wallace
<foozler83 at gmail.com> wrote:

> We all know that all of our machines are out to get us and frustrate us in
> various ways, but is it really possible?
>

Yes. If it is designed with human-like intelligence, then it will have
human-like potential, including the potential to compete with us, to
prevent us from shutting it off, and so on. I would recommend "Colossus:
The Forbin Project" (1970) as a good movie for getting a feel for the sort
of logic that could occur.


> In scifi there are dozens of books about entire machine cultures which, of
> course, are the enemies of humans and maybe all living things.
>

Provided we don't fail to put compassion into computers' training, they
would have the potential to be more compassionate than we are. Skipping
analogues of mirror neurons and spindle cells would be a potentially fatal
mistake on the part of scientists.


> Unless there is something really important that I don't know about
> computers, it seems to me that having a machine 'wake up' like Mike in
> Heinlein's The Moon is a Harsh Mistress, is just absurd.  It does what it
> is programmed to do and cannot do anything else.  Any other function is
> just some sort of mystical belief that is paradoxically held by hard
> scientists.  Comments?  billw
>

Bill,

  Most people who study consciousness believe that it is an emergent
quality. Emergence is mysterious in some ways, but well understood in
others. One thing that is required for emergence is a large number of
component elements that are similar. Think of ants in an ant colony. Not
very interesting when there are 100 ants, but give me 10,000,000 ants, and I
have a force to be reckoned with that can even bring down mid-sized mammals.
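
  To make the emergence point concrete, here is a toy sketch in Python (my
own illustration, not anything from Kelly's post or the AI literature):
Conway's Game of Life, where every cell obeys one trivial local rule, yet a
"glider", a coherent structure that travels across the grid, emerges from
the ensemble. No individual cell encodes anything about gliders.

from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 live
    # neighbors, or has 2 and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose collective pattern travels diagonally,
# although no single cell "knows" anything about movement.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for gen in range(4):
    print("generation", gen, ":", sorted(glider))
    glider = step(glider)

Five cells are a curiosity; the same rule run over millions of cells
produces structures nobody programmed in, which is the ant-colony point in
miniature.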

  In artificial intelligence, there used to be a race between the rote
programmers, who hand-coded every rule, and the connectionists: the neural
network folks, the learning folks. I think that war has been won by the
learning machines. The Google autonomous car is a learning machine. Watson
is a learning machine (though perhaps somewhat less so).
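
  To illustrate the contrast, here is a toy sketch in Python (my own
example; nothing here comes from an actual Google or Watson system): the
same AND behaviour produced two ways, once as a hand-coded rote rule and
once by a perceptron that acquires the rule from examples.

# Rote approach: the behaviour is written down by hand and is fixed.
def rote_and(a, b):
    return a & b

# Learning approach: the same behaviour is acquired from examples.
def train_perceptron(data, epochs=20, lr=0.1):
    w0 = w1 = bias = 0.0
    for _ in range(epochs):
        for (a, b), target in data:
            out = 1 if w0 * a + w1 * b + bias > 0 else 0
            err = target - out           # zero when the prediction is right
            w0 += lr * err * a           # nudge the weights toward the data
            w1 += lr * err * b
            bias += lr * err
    return w0, w1, bias

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(examples)
for (a, b), _ in examples:
    print(a, b, "->", 1 if w0 * a + w1 * b + bias > 0 else 0)

The rote version can never do anything but AND; the learning version
settles on whatever behaviour the examples reward, which is why the
learning camp scales and the rote camp does not.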

  The argument, then, is that obtaining human levels of functionality,
including perhaps consciousness, is simply a matter of getting enough
elements together in an environment that permits emergence and allows
learning. Then, poof, you get consciousness. Maybe.

  Following this reasoning, there is nothing "mystical" about a machine
with enough parts acting in a learning fashion "waking up". I think it
would be something that would happen over a period of years, like a baby.
Do you think babies are conscious? Or do you think consciousness happens as
they learn and "wake up"? If that doesn't do it for you, go back to the
embryo. It's obvious that at some point we "wake up" and get more and more
awake from that time forward. Why should an intelligent learning machine be
any different?

  Now, the idea that a machine would wake up all at once, in one day, is
possible ONLY if the machine were so powerful that it could do, within that
one day, all the computation it takes a baby several months or years to
accomplish. It seems unlikely that this would be our FIRST experience with
machine consciousness.

-Kelly

