[ExI] stealth singularity

ablainey at aol.com
Mon Sep 20 22:49:37 UTC 2010


 


Imagine the AI can compose jokes like the first one, but not like the second.  It is puzzled at how we do it and why humans like the second but not the first, whereas AIs like the first but don't understand what is funny about the second.  It wants to understand that paradox.  So it watches us.

 3) Two ants and a stick insect walk into a bar. I crushed them under my foot while on the way to the car.

The above shows the real problem. Ants can move dirt much better than I can, and they do a multitude of other cool things way better. But unless I am concentrating on studying them, they are going to get squished, usually unintentionally.
Maybe the joke is on us?


 


 

 

-----Original Message-----
From: spike <spike66 at att.net>
To: 'ExI chat list' <extropy-chat at lists.extropy.org>
Sent: Mon, 20 Sep 2010 22:26
Subject: Re: [ExI] stealth singularity



On Mon, Sep 20, 2010 at 3:19 PM, spike <spike66 at att.net> wrote:

 
Can you imagine *any* circumstances whereby an emergent AI would decide to
stay under cover, at least for a while?  I can.  Anyone else?


1) AI:us :: us:cockroaches...  Jebadiah Moore
 
 
 
Jeb, there is a ton of stuff here, more than I have time to answer, but I like your thinking.
 
As Damien showed with the AI roadmap he posted a few days ago (written ten years ago), there is plenty of thoughtspace here.
 
For now, let me just talk about number 1) AI : human :: human : cockroaches.
 
That is one way to look at it, but I would counterpropose this analogy:
 
AI : human :: human : bird.
 
The reason I want to go down that road is that it is easy for me to imagine that even a supercapable AI will observe that, for some reason, there are a few things humans can do better than AI, and that it doesn't really understand how we do it.
 
On my last trip to the ranch in Oregon, I was out by the river.  An eagle swooped down, went into a flat glide over the surface, then snatched a fish right out of the water.  It was a most astonishing sight.  I had heard eagles can do that, but seeing it was a knockout.  Isn't it interesting that humans cannot do that?  Sure we can catch fish: we can hook the hapless beast, we can net it, we can shoot it and scoop the pieces out afterwards, but we cannot build a machine that can fly down, skim over and snatch the scaly bastard out of the water.  Our feedback control theory isn't sufficiently advanced to do that, and it isn't clear to me that we could do it even with sufficient resources given current theory.  We don't know how an eagle can do it, given their little bit of brain.  Impressive.
 
Point: human intellectual abilities are supersets of almost all things that almost all beasts do, but there are a few things a few beasts can do that are outside our ability.  You can likely think of your own examples.
 
Consider an AI realizing that there are a few things that humans just do better than it does, such as writing jokes.  Compare the following two jokes for instance:
 
1.  Two transistors go into a bar, one says "I will have a volt of positive charge, and my girl here will have half a volt of negative."  The bartender gives them a puzzled look and says "But...I see she is a millifarad, so why don't you just give her 500 microcoulombs of yours?"
 
2.  Two tourists at a bus stop in Germany; a BMW driver pulls up asking directions but all he gets are blank stares.  "Sprechen Sie Deutsch?"  Blank stares.  "Parlez-vous français?"  Nothing.  "Parla italiano??  ¿Habla español???" he demands.  Nothing.  Roars off in a huff.  One tourist turns to the other and says "Billy Joe, me and you aughta learn to talk one of them furrin languages."  The other, "Ah don't cotton to it Bobby Jack.  That there feller knowed four of them, didn't do him no good."
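As an aside, the bartender's arithmetic in the first joke actually checks out: the charge on a capacitor is Q = C × V, so a 1 millifarad capacitor at half a volt holds 500 microcoulombs.  A minimal sketch (the function name is mine, not from the joke):

```python
# Checking the bartender's math: charge Q on a capacitor is Q = C * V
# (capacitance in farads times voltage in volts gives charge in coulombs).

def charge_coulombs(capacitance_farads: float, volts: float) -> float:
    """Charge stored on a capacitor charged to a given voltage."""
    return capacitance_farads * volts

# The "girl" is a 1 millifarad capacitor wanting half a volt:
q = charge_coulombs(1e-3, 0.5)
print(f"{q * 1e6:.0f} microcoulombs")  # prints "500 microcoulombs"
```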
 
Imagine the AI can compose jokes like the first one, but not like the second.  It is puzzled at how we do it and why humans like the second but not the first, whereas AIs like the first but don't understand what is funny about the second.  It wants to understand that paradox.  So it watches us.
 
spike
 
 
 


  
  
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Jebadiah Moore
Sent: Monday, September 20, 2010 1:00 PM
To: ExI chat list
Subject: Re: [ExI] stealth singularity


  
  
On Mon, Sep 20, 2010 at 3:19 PM, spike <spike66 at att.net> wrote:
  
Can you imagine *any* circumstances whereby an emergent AI would decide to
stay under cover, at least for a while?  I can.  Anyone else?


1) AI:us :: us:cockroaches
2) AI doesn't perceive us at all as individuals; instead, it's him and the Earth, and they're playing a game
3) Benevolent AI predicts that if we can upload, it'll create a divide between uploaders and non-uploaders, and one side will wipe out the other
4) Benevolent AI predicts that if we can upload before we understand physics more fully, we'll stop worrying about physics, then forget about the substrate, then perish in the collapse of the sun, whereas if it hides out we'll figure out physics enough to save ourselves
5) AI predicts that uploads will compete/merge until there is just one, and that one will kill him
6) Benevolent AI sees hostile aliens coming, so builds weaponry, but wants to keep it hidden from the humans so that we don't kill ourselves
7) Benevolent AI wants to provide us with advances, but is waiting for the right memetic environment

-- 
Jebadiah Moore
http://blog.jebdm.net

 
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

 

