[extropy-chat] A few points of interest for future historians

spike spike66 at comcast.net
Sun Sep 25 01:00:22 UTC 2005


> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> bounces at lists.extropy.org] On Behalf Of The Avantguardian
> Sent: Saturday, September 24, 2005 4:34 PM
> To: ExI chat list
> Subject: Re: [extropy-chat] A few points of interest for future historians
> 
> 
> 
> I am inclined to agree with Spike on this one. I don't
> belong to SL4 or any of the other lists. I don't have
> a lot of knowledge of AI jargon and research. I do
> however have an appreciation of Bayesian inference...


My earlier comment should in no way be construed
as agreement with Marc Geddes' notions.  I consider
them, in fact, irresponsible and dangerous.  One can
almost hear the mad scientist's muwaahahahahaaa in
the posts.

However, I can see the value in knowing what 
dangerous and irresponsible looks like in AI, so
we need not discourage posts on the topic.

Now then, I know little about AI, so multiply my
next comments by the appropriate K sub C factor.

Humanity is still somehow missing something fundamental
in the creation of AI.  We are about at the place where
the alchemists were in the 18th century, struggling
just to find the right formula or recipe.  By that time,
a lot of the chemists who studied what had already been
done suspected there was something very fundamental
about certain materials that was not known.  Eventually
the periodic chart came along, then the Bohr model of the 
atom, and today we could theoretically create gold
if we wanted.

We are at the pre-periodic-chart stage in AI.  We are 
seeing many really smart people study for a scientific
lifetime with little to show for it.

Many in the 18th century worried about the
disaster that would result if anyone managed to
create gold, since the world's economies depended
upon its scarcity, but they needn't have concerned
themselves.  Likewise, I don't think a lone mad
scientist is going to stumble upon the recipe for
AI today.  If she did, however, I see no justification
for assuming the result would be friendly.  I can easily
imagine reasons why an emergent AI would be hostile. 

spike




