[extropy-chat] Bluff and the Darwin award

Samantha Atkins sjatkins at mac.com
Wed May 17 17:53:50 UTC 2006


On May 16, 2006, at 1:39 PM, Russell Wallace wrote:

> On 5/16/06, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
> It is rather reminiscent of someone lecturing me on how, if I don't
> believe in Christ, Christ will damn me to hell.  But Christians have
> at least the excuse of being around numerous other people who all
> believe exactly the same thing, so that they are no longer capable
> of noticing the dependency on their assumptions, or of properly
> comprehending that another might not share their assumptions.
>
> *ahem* Given that in both cases I'm the one _failing_ to believe in  
> the imminent manifestation of deity on Earth via never-observed  
> processes, I think your analogy has one of those "minus sign  
> switched with plus sign" bugs ;)

You seem to be confusing Singularity with certain extremely utopian
or dystopian outcomes.  I am not using the word in that sense, and I
don't think many others in this thread are either.  I don't expect a
deity to manifest on Earth as part and parcel of Singularity.

>
> And you, Russell, need not list any of the negative consequences of
> believing in a hard takeoff when it doesn't happen, to convince me  
> that
> it would be well not to believe in it supposing it doesn't happen.
>
> Then perhaps what I need to do is explain why it won't happen?  
> Okay, to make sure we're on the same page, a definition:
>
> Hard takeoff = a process by which seed AI (of complexity buildable
> by a team of very smart humans in a basement) undertakes
> self-improvement to increase its intelligence, without needing mole
> quantities of computronium, _without needing to interact with the
> real world outside its basement_, and without needing a long time;
> subsequently emerging in a form that is already superintelligent.
>

Who says this will happen with no interaction with the rest of the
world on the part of the seed AI?  As you know, much has been written
about the difficulty of keeping the AI sufficiently isolated from the
world.  And I have no reason to consider self-improvement, or even
really clever hacking, insufficient to reach super-intelligence in
that "basement" given just the corpus of information reachable in
read-only mode from the internet and reasonably good self-improving
code.  The latter is imho key if in fact humans are incapable of
designing the components of a super-intelligence themselves.  Why do
you think computronium is required?  It obviously was not required to
get to human intelligence.

> 1) I've underlined the most important condition, because it exposes  
> the critical flaw: the concept of hard takeoff assumes  
> "intelligence" is a formal property of a collection of bits. It  
> isn't. It's an informal comment on the relationship between a  
> collection of bits and its environment.

Huh?  It is you who posited a completely isolated environment that  
cannot be breached.  An initial environment does not have to include  
the entire world in order for the intelligence to grow.

>
> Suppose your AI comes up with a new, supposedly more intelligent  
> version of itself and is trying to decide whether to replace the  
> current version with the new version. What algorithm does it use to  
> tell it whether the new version is really more intelligent? Well it  
> could apply some sort of mathematical criterion, see whether the  
> new version is faster at proving a list of theorems, say.
>

Or whatever list of skills you care to name as markers of
intelligence.  But all of that would not be the point.  If the AI can
improve the very basis of intelligence, the very components of its
mind, then it will become more intelligent over time.  Your argument
does not convince me that this is impossible.
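
To make this concrete, here is a rough sketch (Python, every name
hypothetical) of the kind of acceptance test you describe, generalized
from "faster at proving theorems" to an arbitrary suite of skill
markers:

# Sketch only: a candidate self-modification is adopted when it scores
# at least as well as the current version on every benchmark in a
# suite of skill markers, and strictly better on at least one.

def accept_candidate(current, candidate, benchmarks):
    improved = False
    for bench in benchmarks:
        cur_score = bench.score(current)
        cand_score = bench.score(candidate)
        if cand_score < cur_score:
            return False      # any regression rejects the candidate
        if cand_score > cur_score:
            improved = True
    return improved

# benchmarks might include theorem proving, planning, language
# comprehension, code synthesis, and whatever else one cares to name.

Nothing in that loop requires the suite to be limited to mathematics;
the question is only how broad and well-chosen the suite is.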

> But the most you will ever get out of that process is an AI that's  
> intelligent at proving mathematical theorems - not because you  
> didn't apply it well enough, but because that's all the process was  
> ever trying to give you in the first place. To create an AI that's  
> intelligent at solving real world problems - one that can cure  
> cancer or design a fusion reactor that'll actually work when it's  
> turned on - requires that the criterion for checking whether a new  
> version is really more intelligent than the old one, involves  
> testing its effectiveness at solving real world problems. Which  
> means the process of AI development must involve interaction with  
> the real world, and must be limited in speed by real world events  
> even if you have a building full of nanocomputers to run the software.
>

Yes and no.  What is the real-world constraint on building and
testing software?  Yes, if you want to solve engineering problems you
eventually need to get beyond the design tools and simulations, but
those remain highly valuable.  Actually building and testing a new
solution can be done by human beings working from the design the AI
came up with.  Where is the crucial problem?
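
A rough sketch (Python, hypothetical interfaces) of the split I have
in mind: the AI iterates rapidly against design tools and simulators,
and only occasionally hands a finished design to humans for slow
real-world building and testing:

# Sketch only: fast simulated iteration, with occasional slow
# real-world validation performed by humans.

def develop(design, simulator, build_and_test,
            sim_rounds=10000, real_rounds=3):
    for _ in range(real_rounds):                # paced by real-world events
        for _ in range(sim_rounds):             # runs at machine speed
            feedback = simulator.evaluate(design)
            design = design.revise(feedback)
        field_report = build_and_test(design)   # humans build and test it
        simulator.calibrate(field_report)       # real data refines the simulator
    return design

The outer loop is indeed limited by real-world events, but the vast
majority of the work happens in the inner loop.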


> There are other problems, mind you:
>
> 2) Recursive self-improvement mightn't be a valid concept in the  
> first place. If you think about it, our reason for confidence in  
> the possibility of AI and nanotechnology is that we have - we are -  
> existence proofs. There is no existence proof for full RSI. On the  
> contrary, all the data goes the other way: every successful complex  
> system we see, from molecular biology to psychology, law and  
> politics, computers and the Internet, etc., is designed as a stack  
> of layers where the upper layers are easy to change, but they rest  
> on more fundamental lower ones that typically are changed only by  
> action of some still more fundamental environment.
>

I don't see this as valid.  Code can be optimized and refactored by
code.  Code can write, profile, redesign and rewrite code.  If humans
can improve code despite our obvious lack of specialization for such
tasks, I have no reason to believe it is impossible to capture the
relevant knowledge and design a software system to do so.
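
For instance, a minimal sketch (Python, all helpers hypothetical) of
the profile/rewrite/verify loop I mean:

# Sketch only: a program improves its own components and keeps a
# rewrite only when it preserves behaviour on a test workload and
# measurably improves performance.

def self_optimize(program, workload, rewriter, rounds=100):
    for _ in range(rounds):
        hot_spot = program.profile(workload).slowest_component()
        candidate = rewriter.rewrite(program, hot_spot)
        if (candidate.behaves_like(program, workload)
                and candidate.runtime(workload) < program.runtime(workload)):
            program = candidate      # accept the improvement
    return program

Whether the "rewriter" can be made good enough is the open question,
but nothing about the loop itself requires a human in it.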

>
> The matter has been put to the test: Eurisko was a full RSI
> AI. Result: it needed constant human guidance or it'd just wedge  
> itself.
>

So one early effort has exhausted the entire problem space?

> 3) On to the most trivial matter, computing power: there's been a  
> lot of discussion about how much it might take to run a fully  
> developed and optimized AI, and the consensus is that ultimate  
> silicon technology might suffice... but that line of argument  
> ignores the fact that it takes much more computing power to develop  
> an algorithm than it does to run it once developed.

How is this relevant?  Are you including all the discovery of the
problem space, the design effort, checking the results, and so on?
So what?  Does that mean it is impossible?  Not at all.
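
If you mean, say, that developing an algorithm by searching over a
million candidate designs costs roughly a million times as much
computing as running the winner once, then sure, as toy arithmetic
(illustrative numbers only):

# Sketch only: development cost vs. deployment cost for a simple
# search-based development process.
candidates_tried = 10**6      # designs evaluated during development
cost_per_run = 1              # arbitrary unit: one evaluation
development_cost = candidates_tried * cost_per_run   # 1,000,000 units
deployment_cost = cost_per_run                        # 1 unit
print(development_cost // deployment_cost)            # factor of a million

That is a big factor, but it is a statement about cost, not about
possibility.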
>
> Bottom line, I think we'll need molecular computers to develop AI -  
> and not small prototype ones either.
>

I don't.

> So that's one reason why the concept is definitely incoherent, an  
> independent reason why there's good reason to believe it's  
> incoherent, and a third again independent reason why it wouldn't be  
> practical in the foreseeable future even if it were coherent.


That you think a highly contentious thing is true (but cannot prove
it) cannot be used to say "definitely" that something else is the
case.  Your entire post amounts to merely asserting that it can't be
done, without proving any such thing or, imho, even making a good
case.

- samantha


