[ExI] Hard Takeoff

John Grigg possiblepaths2050 at gmail.com
Mon Nov 15 05:27:36 UTC 2010


Brent Allsop wrote:
I would agree that a copy-able human level AI would launch a take-off,
leaving what we have today, to the degree that it is unchanged, in the
dust.  But I don't think achieving this is going to be anything like
spontaneous, as you seem to assume is possible.  The rate of progress
of intelligence is so painfully slow.  So slow, in fact, that many
have accused great old AI folks like Minsky of being completely
mistaken.
>>>

Michael Anissimov replied:
There's a huge difference between the rate of progress from today to
human-level AGI and the time it takes to get from human-level AGI to
superintelligent AGI.  They're completely different questions.  As for
a fast rate, would you still be skeptical if the AGI in question had
access to advanced molecular manufacturing?
>>>

I agree that self-improving AGI with access to advanced manufacturing
and research facilities would probably be able to bootstrap itself at
an exponential rate, rather than at the speed at which humans created
it in the first place.  But the "classic scenario" where this happens
within minutes, hours, or even days or months seems very doubtful to
me.
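
For what it's worth, here is a toy back-of-the-envelope model (Python,
with entirely made-up numbers - illustrative assumptions, not anyone's
estimates) of why I think the answer hinges on how long the first few
self-improvement cycles take, rather than on the exponential feedback
itself:

    # Toy model of recursive self-improvement -- purely illustrative.
    # Assumptions (mine, nobody else's): each rewrite cycle multiplies
    # capability by `gain`, and the wall-clock time per cycle shrinks
    # in proportion to the current capability.

    def days_until(target, first_cycle_days, gain=2.0):
        """Days until capability exceeds `target`, starting from 1.0."""
        capability, elapsed = 1.0, 0.0
        while capability < target:
            elapsed += first_cycle_days / capability  # cycles speed up
            capability *= gain
        return elapsed

    for start in (30.0, 1.0, 0.01):  # month-long, day-long, ~15-minute cycles
        print(f"first cycle = {start:>5} days -> millionfold improvement"
              f" in {days_until(1e6, start):.2f} days")

Under those (very generous) assumptions the total time works out to
roughly twice the length of the first cycle, so the real disagreement
is over how long those early cycles would take, and whether they would
really keep shrinking - not over whether exponential feedback is
powerful.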

Am I missing something here?

John


On 11/14/10, Michael Anissimov <michaelanissimov at gmail.com> wrote:
> Hi Brent,
>
> On Sun, Nov 14, 2010 at 4:12 PM, Brent Allsop
> <brent.allsop at canonizer.com> wrote:
>>
>>
>> Michael, is your ordering important?  In other words, for you, is this
>> the most important argument compared to the others?  If so, I would
>> agree with that ordering.
>
>
> It wasn't meant to be, but I think copying is really important, yes.
>
>
>> I would also include the ability to fully concentrate 100% of the time.
>> We seem to be required to do more than just one thing, and to play, have
>> sex... a lot.  In addition to sleeping.  But all of these, at best, are
>> linear differences, and can be overcome by having 2 or 10... times more
>> people working on a particular problem.
>>
>
> There may be second-order benefits from being able to concentrate longer.
>  To get from one node of an argument or problem to another might require a
> certain amount of sustained attention, for instance.  Any idea requiring
> longer than 20 or so hours of sustained continuous attention would be
> inaccessible to humanity.
>
>
>> I probably don't fully understand what you mean by this one.  To me, all
>> the computer power we've created so far matters only because we can
>> utilize / absorb / or benefit from all of it, at least as much as any
>> other computer would.
>>
>>
> I mean integrating it directly into its brain.  For instance, imagine me
> doubling the amount of processing power in my retina and visual cortex,
> allowing me to see a much wider range of patterns and detail in the world,
> just because I chose to add more computing power to it.  Or imagine giving
> more computing power to the concept-manipulating parts of the brain that
> surely exist but are only understood on a moderate level today.  It's hard
> to say how important it is until we try, but the ability to add computing
> power directly to the brain is something no animal has ever had, so it's
> definitely something interesting and potentially important.
>
>
>>
>>>  6.  constructed from scratch with self-improvement in mind
>>
>> Possibly true but not implied.
>>
>>>  7.  the possibility of direct integration with new sensory modalities,
>>> like a codic modality
>>
>> True, but not unique: the human brain can also integrate with new
>> sensory modalities; this has been tested.
>>
>>
>> What is a 'codic modality'?  We have a significant diversity of knowledge
>> representation abilities as compared to the mere ones and zeros of
>> computers, i.e. we represent wavelengths of visible light with different
>> colors, wavelengths of acoustic vibrations with sound, hotness/coldness
>> for different temperatures, and so on.  And we have great abilities to
>> map new problem spaces into these very capable representation systems, as
>> can be seen by all the progress in the field of scientific data
>> representation / visualization.
>
>
> I hazard to say it's not the same as having a modality custom-crafted for
> the specific niche.  We can map all this great stuff, but in something that
> requires skill and getting it right the first time, it's not the same as
> having the neural hardware.  Really spectacular martial artists probably
> have a "better" motor cortex than ours in some ways.  Parkinson's patients
> have a "worse" substantia nigra that leads to pathology.  Really good artists
> probably have slightly "better" brain sections corresponding to visualizing
> images.  These variations take place entirely within the space of human
> possibilities, and they're still substantial.  Imagine neurobiological
> differences going significantly beyond the human norm.
>
>
>> I admit that the initial speed difference is huge.  But I agree with Alan
>> that we make up for what we lack in speed with parallelism and many other
>> things.  And we already seem to be at the limit of hardware speed - i.e.
>> CPU speed has not significantly changed in the last 10 years, right?
>>
>>
> It has:
>
> http://en.wikipedia.org/wiki/Megahertz_myth
>
> Of course, people have different opinions based on what they're trying to
> sell, but by and large Moore's law has kept going:
>
> http://cosmiclog.msnbc.msn.com/_news/2010/08/31/5012834-researchers-rescue-moores-law
> http://www.engadget.com/2010/05/03/nvidia-vp-says-moores-law-is-dead/
>
>
>> I would agree that a copy-able human level AI would launch a take-off,
>> leaving what we have today, to the degree that it is unchanged, in the
>> dust.  But I don't think achieving this is going to be anything like
>> spontaneous, as you seem to assume is possible.  The rate of progress of
>> intelligence is so painfully slow.  So slow, in fact, that many have
>> accused great old AI folks like Minsky of being completely mistaken.
>>
>
> There's a huge difference between the rate of progress from today to
> human-level AGI and the time it takes to get from human-level AGI to
> superintelligent AGI.  They're completely different questions.  As for a
> fast rate, would you still be skeptical if the AGI in question had access
> to advanced molecular manufacturing?
>
> --
> michael.anissimov at singinst.org
> Singularity Institute
> Media Director
>


