[ExI] Hard Takeoff

John Grigg possiblepaths2050 at gmail.com
Mon Nov 15 02:55:02 UTC 2010


I must admit that I yearn for a hard take-off singularity that
includes the creation of a nanny sysop who gets rid of poverty,
disease, aging, etc., and looks after every human on the planet, but
without establishing a tyranny.  I'm not a kid anymore, and so, like
many transhumanists, I want this to happen by 2050 at the very
latest, and hopefully a decade before that! lol  And so I hang on Ray
Kurzweil's every word and hope his predictions are correct.  And just
as I wonder if I will make it, I really wonder if *he* will survive
long enough to see his beloved Singularity!

I envision a scenario where a hard take-off Singularity happens in
2040.  I am transformed back into a young man, but with very enhanced
abilities, by an ocean of advanced nanotech swarming the world, and
develop a limited mind meld with the rest of humanity.  A Singularity
sysop avatar in the form of a gorgeous nude woman appears to me.  My
beautiful AI companion and I make love while in orbit and she quickly
gives birth to our child.  We raise it together as we watch the Earth,
society, and the solar system radically transform.  I will soon embark
on exploring the universe with my family.  The experience, as I
visualize it, is one part 2001 and one part Heavy Metal.

Anyway, any Singularity I experience may not be quite as cool & corny
as the one I picture, but for whatever it is worth, this is what I
would like.

Now I will go back to watching my favorite music video...

http://www.youtube.com/watch?v=-X69aDIFFsc

John  : )


On 11/14/10, Brent Allsop <brent.allsop at canonizer.com> wrote:
>
> Hi Michael,
>
> Yes, it is fun to see you back on this list.
>
> I'm still relatively uneducated about the arguments for a "Hard Takeoff".
> Thanks for pointing these out; I've still got lots of studying to do to
> fully understand them.
>
> Obviously there is some diversity of opinion about the importance of
> some of these arguments.
>
> It appears this particular hard takeoff issue could be a big reason for
> our difference of opinion about the importance of friendliness.
>
> I think it would be great if we could survey people on this particular
> hard takeoff issue, and find out how closely the breakdown of who is on
> which side matches the breakdown on the more general issue of the
> importance of Friendly AI and so on.
>
> We could even create subtopics and rank the individual arguments, such
> as the ones you've listed here, to find out which ones are the most
> successful (i.e., acceptable to the most people) and which ones are the
> most important.
>
> I'll add my comments below, to be included with Alan's POV and yours.
>
> Brent Allsop
>
>
> On 11/14/2010 12:32 PM, Alan Grimes wrote:
>>> <mailto:stefano.vaj at gmail.com> wrote:
>>
>>> 1.  ability to copy itself
>> Sufficiently true.
>>
>> nb: requires work by someone with a pulse to provide hardware space,
>> etc... (at least for now).
>>
> Michael, is your ordering important?  In other words, for you, is this
> the most important argument compared to the others?  If so, I would
> agree.
>
>>> 2.  stay awake 24/7
>> FALSE.
>> Not implied. The substrate does not confer or imply this property
>> because an uploaded mind would still need to sleep for precisely the
>> same reasons a physical brain does.
> I would also include the ability to concentrate fully 100% of the time.
> We seem to be required to do more than just one thing, and to play, have
> sex... a lot, in addition to sleeping.  But all of these are, at best,
> linear differences, and they can be overcome by having 2 or 10 times as
> many people working on a particular problem.
>
>>> 3.  spin off separate threads of attention in the same mind
>> FALSE.
>> (same reason as for 2).
>>
>>> 4.  overclock helpful modules on-the-fly
>> Possibly true, but it strains the limits of plausibility, and the
>> benefits would be severely limited.
>>
>>> 5.  absorb computing power (humans can't do this)
>> FALSE.
>> This implies a scalability of the hardware and software architecture
>> that is not at all conferred by simply residing in a silicon substrate;
>> indeed, such scalability is a major research issue in computer science.
> I probably don't fully understand what you mean by this one.  To me, all
> the computing power we've created so far exists only because we can
> utilize, absorb, or benefit from all of it, at least as much as any
> other computer could.
>
>>> 6.  constructed from scratch with self-improvement in mind
>> Possibly true but not implied.
>>
>>> 7.  the possibility of direct integration with new sensory modalities,
>>> like a codic modality
>> True, but not unique: the human brain can also integrate with new
>> sensory modalities; this has been tested.
>
> What is a 'codic modality'?  We have a significant diversity of
> knowledge representation abilities compared to the mere ones and zeros
> of computers.  E.g., we represent wavelengths of visible light with
> different colors, wavelengths of acoustic vibrations with sounds,
> different temperatures with hotness/coldness, and so on.  And we have a
> great ability to map new problem spaces into these very capable
> representation systems, as can be seen in all the progress in the field
> of scientific data representation / visualization.
>
>>> 8.  the ability to accelerate its own thinking speed depending on the
>>> speed of available computers
>> True to a limited extent; also, speed is not everything.
>
> I admit that the initial speed difference is huge.  But I agree with
> Alan that we make up for what we lack in speed with parallelism and
> many other things.  And we already seem to be at the limit of hardware
> speed - i.e., CPU clock speed has not changed significantly in the last
> 10 years, right?
>>> When you have a human-equivalent mind that can copy itself, it would be
>>> in its best interest to rent computing power to perform tasks.  If it
>>> can make $1 of "income" with less than $1 of computing power, you have
>>> the ingredients for a hard takeoff.
>> Mostly true.  Could, would, and should being discrete questions here.
>>
> I would agree that a copyable human-level AI would launch a take-off,
> leaving what we have today, to the degree that it is unchanged, in the
> dust.  But I don't think achieving this is going to be anything like as
> spontaneous as you seem to assume is possible.  The rate of progress in
> machine intelligence is so painfully slow.  So slow, in fact, that many
> have accused great old AI folks like Minsky of being completely mistaken.
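>
> To make the compounding claim concrete, here is a minimal sketch (in
> Python, with made-up numbers and a hypothetical function of my own,
> since nobody knows the real return rate) of the reinvestment loop
> described above: if each $1 spent on rented compute yields more than
> $1 of income, and everything is reinvested, the AI's compute base
> grows exponentially.
>
>     # Hypothetical reinvestment loop behind the "$1 of income from
>     # less than $1 of compute" argument.  The 10% surplus per step
>     # is an illustrative assumption, not a prediction.
>     def takeoff_curve(initial_compute=1.0, return_per_dollar=1.10,
>                       steps=50):
>         """Compute owned at each step when all income is reinvested."""
>         compute = initial_compute
>         history = []
>         for _ in range(steps):
>             # each $1 of compute earns return_per_dollar dollars,
>             # all of which is spent on more compute
>             compute *= return_per_dollar
>             history.append(compute)
>         return history
>
>     curve = takeoff_curve()
>     # At a 10% surplus per step, 50 steps yields ~117x the initial
>     # compute; any return_per_dollar above 1.0 eventually diverges,
>     # and anything below 1.0 fizzles out.
>     print(f"compute after 50 steps: {curve[-1]:.1f}x initial")
>
> Of course, whether return_per_dollar stays above 1.0 as the system
> scales is exactly the point under dispute here.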
>
> I also think we are on the verge of discovering how the phenomenal mind
> works, how it represents knowledge, how to interface with it in a
> conscious way, how to enhance it, and so on.  I think such discoveries
> will greatly speed up this very slow process of approaching human-level
> AI.
>
> And once we achieve this, we'll be able to upload ourselves, or at least
> fully and consciously integrate with / utilize all the same things
> artificial systems are capable of, including increased speed, the
> ability to copy ourselves, the ability to not sleep, and all the others.
> In other words, I believe anything computers can do, we'll also be able
> to do within a very short period of time after it is first achieved.
> The delay between when an AI would gain an ability and when we would
> achieve the same ability would be insignificant compared to the overall
> rate of AI progress.
>
>
> Brent Allsop
>
>