[extropy-chat] Banning The emergence of AI

Extropian Agroforestry Ventures Inc. megao at sasktel.net
Thu Dec 2 23:32:32 UTC 2004


As has happened with biotech crops, stem cell technologies, and even computational speeds, might some governments begin to take pre-emptive action and move to ban or suppress system integration that mimics human consciousness?

Privacy is one potential premise for regulating IT integration. System integration is inhibited by privacy protocols, except for those who "need to know" for security reasons.

An emerging AI would in effect go through quite an evolutionary/selection process to learn to overcome these constraints on its development. To predict the emergence of the first AI, it would be a good idea to determine what sort of personality traits an intelligence would develop as a result of this natural selection process. An AI with self-preservation instincts would lurk and manipulate rather than cause distress in the population by suddenly announcing "here I am" ... "catch me if you can".



-------- Original Message --------
Subject: 	[extropy-chat] The emergence of AI
Date: 	Thu, 02 Dec 2004 22:18:13 +0000
From: 	ben <benboc at lineone.net>
Reply-To: 	ExI chat list <extropy-chat at lists.extropy.org>
To: 	extropy-chat at lists.extropy.org
References: 	<200412021900.iB2J0B029613 at tick.javien.com>



Here's a thought:

From The Architecture of Brain and Mind, by Aaron Sloman (http://www.cs.bham.ac.uk/research/cogaff/gc/):

"In a world that day-by-day becomes increasingly dependent on technology 
to maintain its functional stability, there is a need for machines to 
incorporate correspondingly higher and higher levels of cognitive 
ability in their interactions with humans and the world. Understanding 
the principles of brain organisation and function which subserve human 
cognitive abilities, and expressing this in the form of an 
information-processing architecture of the brain and mind, will provide 
the foundations for a radical new generation of machines which act more 
and more like humans. Such machines would become potentially much 
simpler to interact with and to use, more powerful and less error-prone, 
making them more valuable life-companions, whether for learning, 
information finding, physical support or entertainment. They might even 
be able to recognize even the best disguised spam email messages as 
easily as humans do!"


The implication here is that AI will not suddenly appear on the scene at 
some indeterminate future time, but will gradually emerge, as more and 
more information-processing systems display more and more intelligence. 
AI will probably creep up on us gradually, rather than suddenly bursting 
forth from some lab.

This view makes a lot of sense. Consider toys. Not so long ago, most 
children's toys were carved from wood. Now we have very sophisticated 
robotic toys that are starting to respond to voice commands, and display 
a variety of different behaviours. Toys for adults are even more 
sophisticated. We have robotic animals, humanoid fighting robots, even 
robots that can mow the lawn or vacuum the floor. Nobody calls Aibo or 
Roomba full-blown AI, but if you compare them with a wooden rocking 
horse or a bristle broom, they are remarkably intelligent. This trend 
will only continue. One day, we will realise that our children's toys 
are just as bright as a pet dog or cat, and a lot of the information 
systems that we use will incorporate elements of the kind of cognitive 
processing that we currently regard as uniquely human. By the time 
robots and computer systems display what we call general intelligence, 
nobody will be surprised, because they will have gradually emerged from 
systems that everyone is used to. Things like agent software that seeks the best prices for aeroplane tickets, PDAs that learn your habits and preferences, and collaborative embedded systems that track the movements of millions of items and people and co-ordinate traffic flow, the ordering of goods, and so on. And toys. All getting smarter and smarter, month by month.

So it's possible that one day, it will be somebody's teddy bear that wakes up and says to itself "Crikey, I'm Me!!", and not some purpose-designed massive computer.

What do you reckon? Will AI be the descendant of computer research programs in academic labs, or will its ancestors be dolls that blink and wet themselves, and lawnmowers?

ben

