[extropy-chat] Timescale to Singularity

The Avantguardian avantguardian2020 at yahoo.com
Fri Jun 17 07:00:03 UTC 2005



--- Marc Geddes <marc_geddes at yahoo.co.nz> wrote:

> For a variety of reasons I think 2030 is a very
> conservative, realistic, and achievable target for
> Singularity.

Not a terribly bad target date. I personally don't
think it will happen until at least 2035, but I think
it will definitely have happened by 2050.

  
> I base my target date on comments made by leading AI
> researchers such as Wilson, Yudkowsky and Goertzel
> (who all stated on various message boards that the
> problem of general intelligence was now mostly
> solved and it is only the Friendliness problem that
> now awaits solution).

Good luck with that. If you can't even get people to
get along with each other, how are you going to get a
super-intelligence to get along with them?


> In any event, there is also evidence that other
> technologies such as nano-technology or
> bio-technology should be reaching maturation by 2030,
> and the dangers posed by these technologies require
> AGI to handle safely.

Oh hell no! No MACHINE is going to tell me what I can
or can't do in the lab. I can't even stand it when the
President tries to tell me what I can or can't
research, and at least he has enough imagination to lie
to me. I tell you what: if your A.I. can let me have
the first move and still beat me at Go, then I will
start doing what it says. Until then, it stays the
hell out of my business.
  
> Finally, for me personally, I'm 33 now and by 2030
> I'd be 58, which is too old to be sure of my further
> survival without major advances in bio-tech
> (survival rates plummet after one turns 60).

If you aren't ready to die any second, what chance do
you think you have to live forever?

> There are no books and papers for them to copy the
> solution to morality from, so I predict that they're
> up the creek without a paddle when it comes to
> morality ;)   Serves them right, really.   Monumental
> arrogance displayed to others, over-confidence, too
> much egoism, etc.
  
Alright. You place your hopes on being able to program
the A.I. to be nice to you. I'll place mine on being
smarter and more clever than the A.I. That's just good
advice from my friend Chuck Darwin.

The Avantguardian 
is 
Stuart LaForge
alt email: stuart"AT"ucla.edu

"The surest sign of intelligent life in the universe is that they haven't attempted to contact us." 
-Bill Watterson



