[ExI] How could you ever support an AGI?
ABlainey at aol.com
ABlainey at aol.com
Mon Mar 3 01:11:48 UTC 2008
In a message dated 02/03/2008 18:57:00 GMT Standard Time,
robert.bradbury at gmail.com writes:
>I have not posted to the list in some time. And due to philosophical
>differences I will not engage in open discussions (which are not really open!).
>
>But a problem has been troubling me recently as I have viewed press releases
>for various AI conferences.
>I believe the production of an AGI spells the extinction of humanity. More
>importantly, it has what I would call back-propagating effects. Why should I
>expend intellectual energy, time, money, etc. on a doomed species? Put
>another way, those of you who have had and/or are investing in children are
>potentially pursuing a pointless endeavor. If an AGI develops or is developed,
>their existence is fairly pointless. Our current culture obviously shows
>absorption is nearly instantaneous for younger minds. They will know they are
>"obsolete" in an AGI world.
>So given some limited genetic drive to keep making humans, that will last a
>while. But I see no way out of the perspective that the general development
>(vs the managed development) of an AGI leads to the survival of humanity.
>
>And so, we must present transhumanism as an "Extinction Level Event" -- are
>we willing to deal with that?
>
>Robert
Hello again Robert (been a long time).
I have similar concerns to yours and am leaning heavily toward AGI = very
bad. In a fairly recent poll of opinions regarding AGIs for Bruce Klein of
Novamente, I voiced some of my concerns. The main backbone around which these
fears are framed is the lack of hormonal or chemical influence on an AGI, a
subject I raised many years ago.
Personally, I have continued to invest in children mainly in the hope that my
unified general theory of relativity finally falls together and can be easily
applied to get me off this rock, just before the singularity occurs.
If all goes well I may return. If not, then at least my home made
interstellar cryo chamber won't need topping up every few weeks.
There is always hope....... hopefully.
Alex
Copy of my reply to Bruce pasted below.
>Alex, quick question... when do you think
>AI will surpass human-level intelligence?
>[ ] 2010-20
>[ ] 2020-30
>[ ] 2030-50
>[ ] 2050-70
>[ ] 2070-2100
>[ ] Beyond 2100
>[ ] Prefer not to make predictions
>[ ] Other: __
Hi Bruce,
I'm not sure I would agree with the question itself, but if you want my honest
answer, it really isn't as simple as ticking a box on a time frame. The
truth is that AI already surpasses human intelligence in many areas. As for the
fields of intelligence where AI does not equal or surpass human ability, this
is really an issue of ‘lack of application’ rather than lack of applicable
technology.
I am sure that if the people you have put this question to are the usual
suspects, then you will receive many in-depth calculations of comparative
computation, so I will skip the maths needed to prove the point.
So what it boils down to is this:
When will we finally put all the relevant technology together in one box, to
create an AI that surpasses the average human intelligence?
My answer to this would be 2020-30, unless there is a major world economic
upset in the next decade, which is a distinct possibility, in which case I
would push it to 2030-50.
However, I would add a strong caveat and warning.
If we do not put all the technology together in one box in a systematic and
controlled manner, at some point it will happen spontaneously, through pure
chance or accident. The internet is a prime example of an opportunity for this
to occur. When it happens, and it will, we will have no control, insight or
warning. We (Homo sapiens) will instantly become obsolete. The ramifications
of this are impossible to predict.
As if this isn't bad enough, a spontaneously formed AI will have far superior
information-gathering skills and strategic analysis, will know our entire
knowledge base (including all the utter rubbish on Wikipedia) and will be
completely devoid of 'natural hormonal control', which in short means no
emotions, fears, wants, needs or empathy for anyone or anything, including
itself.
An intelligence of this magnitude, with a global reach into just about every
control system on the planet, could and probably will do major damage,
although probably not through design or desire, but just through exploration
of ability or pure accident.
When would I put a time frame on this happening?
2020-30
So as you can see, I think the singularity is going to happen quite soon,
whether we want it to or not. It may sound like I am a doomsayer, but I am far
from it. When you are going to be hit in the head, you generally see it coming and have
the chance to duck. The race to the singularity is already well underway and so
the real question is: Will we be in control?
Alex