[extropy-chat] Bluff and the Darwin award

Samantha Atkins sjatkins at mac.com
Thu May 18 00:03:15 UTC 2006


On May 17, 2006, at 1:34 PM, Russell Wallace wrote:

> On 5/17/06, Samantha Atkins <sjatkins at mac.com> wrote:
>> You seem to be confusing Singularity with some particular extremely
>> utopian or dystopian outcomes.  I am not using the word in that
>> sense and I don't think many others in this thread are either.  I
>> don't expect a deity to manifest on Earth as part and parcel of
>> Singularity.
>
> Eliezer effectively does though, and it's his view that I was  
> arguing against here.

Actually, I think he abandoned his Sysop-type position quite some
time ago, and even that was no deity.  What I was referring to was
the Singularity as what comes of the emergence of greater-than-human
intelligence (the Vingean singularity), as opposed to a variety of
other notions.

>
> To recap, my position is:
>
> 1) The Singularity is a fine way of thinking about the distant  
> future, a state of affairs we may ultimately reach.
>

Greater-than-human intelligence is arguably not at all "distant".

> 2) However, we are not close enough to be able to make meaningful,  
> specific predictions about questions like what will society be like  
> during and after the Singularity.
>
> 3) Therefore it is not good to turn it into a political football at  
> this time, because we lack the data to base a political debate on  
> fact and reason, and such a debate based on conjecture and emotion  
> is likely to be counterproductive.
>

Haven't I already agreed, a good way back, that AGI regulation would
be quite wrong at this time?  Are you attempting to insist that I
agree with your reasoning and means (denial of the Singularity) for
opposing it?  If so, you can save yourself the carpal tunnel.  It
ain't gonna happen.  :-)

> There are various memes floating around for versions of the idea  
> that Singularity can be achieved in a short timescale by currently  
> known processes. I'm arguing that none of these are realistic; the  
> post you're replying to here was the one in which I present my  
> reasons for believing that Eliezer's version involving recursive  
> self-improvement, in particular, is unrealistic. I do this  
> reluctantly, since when I studied the question of hard takeoff it  
> was not with a view to disproving it. But I find the conclusion  
> inescapable nonetheless.
>
>>> Hard takeoff = a process by which seed AI (of complexity buildable
>>> by a team of very smart humans in a basement) undertakes self-
>>> improvement to increase its intelligence, without needing mole
>>> quantities of computronium, _without needing to interact with the
>>> real world outside its basement_, and without needing a long time;
>>> subsequently emerging in a form that is already superintelligent.
>>>
>
>> Who says this will happen with no interaction with the rest of the
>> world on the part of the seed AI?
>
> Eliezer has said it can so happen, and you yourself support this  
> conjecture in the next few sentences:
>
>> As you know much has been written about the difficulty of keeping
>> the AI isolated sufficiently from the world.   But I have no reason
>> to consider self-improvement or even really clever hacking utterly
>> insufficient to reach super-intelligence in that "basement" with
>> just the corpus of information reachable in read-only mode from the
>> internet and reasonably good self-improving code.   The latter is
>> imho key if in fact humans are incapable of designing the
>> components of a super-intelligence.
>
> The concept of "self-improving code" is one of the big pitfalls in  
> reasoning about this area (the other being the concept of  
> "intelligence" as a formal property of a collection of bits). Let's  
> stop to dissect it for a bit.
>
> When we think about self-improving code, the image that always  
> comes to our minds (not just mine, this is true of the examples  
> people post when the idea is discussed) is that of tweaking code to  
> produce the _same output_ using fewer CPU cycles or bytes of memory.
>

Not true.  Improvement is not limited to getting the same answer
faster; it includes getting better answers in some measurable way, as
well as other things such as better learning and abstraction from
inputs.
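
To make the distinction concrete, here is a throwaway Python sketch of
my own (purely illustrative; the toy functions and the scoring are made
up, not anyone's actual design).  The first test only counts a rewrite
as an improvement if it gives the same answers in less time; the second
counts any rewrite that scores higher on an explicit quality measure,
which is the broader sense of improvement I mean:

    import time

    def measured_runtime(f, inputs):
        # Rough wall-clock cost of running f over a batch of inputs.
        start = time.perf_counter()
        for x in inputs:
            f(x)
        return time.perf_counter() - start

    def is_speedup(old, new, inputs):
        # Sense 1: same output, fewer cycles -- classic optimization.
        return (all(old(x) == new(x) for x in inputs)
                and measured_runtime(new, inputs) < measured_runtime(old, inputs))

    def is_better(old, new, inputs, score):
        # Sense 2: different output that is better by some measurable criterion.
        return score(new, inputs) > score(old, inputs)

    # Sense 1 example: two ways of summing a list; identical answers.
    slow_sum = lambda xs: sum([x for x in xs])
    fast_sum = lambda xs: sum(xs)

    # Sense 2 example: two predictors of x*x; one is simply more accurate.
    rough = lambda x: x * x + 5
    sharp = lambda x: x * x + 1
    accuracy = lambda f, xs: -sum(abs(f(x) - x * x) for x in xs)  # higher is better

    batches = [list(range(1000))] * 20
    points = list(range(100))
    print(is_speedup(slow_sum, fast_sum, batches))    # usually True: same answers, faster
    print(is_better(rough, sharp, points, accuracy))  # True: better answers, not the same ones

The second kind of acceptance test is where learning and abstraction
come in: the "score" can measure anything you can evaluate, not just
runtime.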

> I emphasize those words because they are the crux of this issue.  
> Improving an AI's performance isn't about producing the same output  
> faster, it's about producing _different output_ from the same  
> input. In other words, it's not just about changing the code, but  
> changing the _specification_, and that's a completely different  
> thing. As you know yourself, changing the specification with a  
> reasonable assurance that the result will be an improvement isn't  
> the sort of thing that can be done by a smart compiler. It's  
> something that requires domain (not just programming) expertise,  
> and real-world testing - which is what I've been saying.
>

If it can be done by any intelligence, such as a biological human,
then it can be done by other comparably competent intelligences.  I
was not talking about the constrained notion of self-improvement you
describe above, which I believe was obvious from my earlier post.

>> Why do you think computronium is required?   It was not required to
>> get to human intelligence obviously.
>
> Yes it was. A single human brain contains more than an exaflop's  
> worth of fault-tolerant self-rewiring nanotech computronium (just  
> looking at the neurons alone, nevermind the as yet unquantified  
> contributions of the glial cells, peripheral nervous system and  
> rest of the body). And getting to human intelligence took a large  
> population of such entities over millions of years of interaction  
> with and live testing in the real world.
>

There is nowhere to go if you are going to put forth  such  
meaningless arguments as claiming human intelligence depends on  
computronium.

> Not being restricted to blind Darwinian evolution, it shouldn't  
> take us millions of years to create AI; but the requirement for  
> interaction with the real world isn't going to go away.

Any kind of interaction is sufficient for learning that depends on
interaction.  It need not be with the "real" world.  But that was
hardly the point, was it?

>
>>> 1) I've underlined the most important condition, because it
>>> exposes the critical flaw: the concept of hard takeoff assumes
>>> "intelligence" is a formal property of a collection of bits. It
>>> isn't. It's an informal comment on the relationship between a
>>> collection of bits and its environment.
>
>> Huh?  It is you who posited a completely isolated environment that
>> cannot be breached.
>
> Actually the concept has been floating around over on SL4 since  
> long before I joined the list; I was just summarizing.
>

Actually, SL4 has talked a lot about such isolation being impossible
to maintain indefinitely, and about it being a safety measure rather
than a design criterion.


>> An initial environment does not have to include the entire world in
>> order for the intelligence to grow.
>
> Sure. That doesn't change the fact that the AI will depend on an  
> environment and the rate at which it learns will depend on the rate  
> at which it can do things in that environment. The reason I'm  
> emphasizing this is to refute Eliezer's idea that the AI can learn  
> at the rate at which transistors switch between 0 and 1,  
> independent of the real world.

I find your argument on this weak.

>
>> Yes and no.  What is the real world constraint on building and
>> testing software?
>
> The hard part is the specification: knowing what output your  
> program should be producing in the first place.
>

This was not the point of the question.  Given a set of desirability
criteria and the ability to self-modify and test the results, it is
possible to optimize the system.  The desirability criteria can be
quite broad and include things like extracting valid information and
abstractions from inputs.
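
By way of illustration only, here is a toy generate-and-test loop I am
making up on the spot (the tiny "system", the noisy observe() inputs
and the desirability function are all stand-ins, nothing like a real
seed AI design).  The point is just that once you can propose
modifications and test them against a desirability criterion, you can
keep whichever candidate tests best, however broad that criterion is:

    import random

    random.seed(0)
    TRUE_SIGNAL = [0.2, -1.0, 0.5, 1.5]   # hidden structure in the toy environment

    def observe():
        # Noisy inputs from the toy environment.
        return [s + random.gauss(0, 0.3) for s in TRUE_SIGNAL]

    def desirability(system):
        # Higher is better: how well the system's internal model matches
        # what repeated observation of the inputs supports.
        samples = [observe() for _ in range(200)]
        averages = [sum(column) / len(samples) for column in zip(*samples)]
        return -sum((w - a) ** 2 for w, a in zip(system, averages))

    def modify(system):
        # Propose a small random self-modification.
        i = random.randrange(len(system))
        changed = list(system)
        changed[i] += random.gauss(0, 0.1)
        return changed

    system = [0.0, 0.0, 0.0, 0.0]
    best = desirability(system)
    for _ in range(2000):
        candidate = modify(system)
        score = desirability(candidate)
        if score > best:                  # keep only modifications that test better
            system, best = candidate, score

    print(system)   # drifts toward TRUE_SIGNAL, guided only by the desirability test

Nothing here requires the criterion to be speed, or even to be about
the program's own code; it only has to be something the system can
test.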


>> Yes if you want to solve engineering problems you eventually need
>> to get beyond the design tools and simulation but these are still
>> highly critical.  Actually building and testing a new solution can
>> be done by human beings from the design the AI came up with.  Where
>> is the crucial problem?
>
> Design tools and simulation are critical yes, and actually building  
> and testing solutions by humans is also critical (I don't expect an  
> early-stage AI to be able to effectively control robots).
>
> As long as these things are available, there is no crucial problem.  
> Please bear in mind that I'm not saying strong AI isn't possible -  
> on the contrary, I believe it is. I'm just listing the requirements  
> for it.
>

OK.  I still don't see these requirements (the ones I agree with)  
taking more than a decade or two.

- samantha

