[ExI] extropy-chat Digest, Vol 86, Issue 51

Keith Henson hkeithhenson at gmail.com
Mon Nov 29 17:01:04 UTC 2010


On Mon, Nov 29, 2010 at 5:00 AM, Brent Allsop
<brent.allsop at canonizer.com> wrote:

> On 11/28/2010 4:45 PM, Keith Henson wrote:
>> On Fri, Nov 26, 2010 at 5:00 AM, Michael Anissimov
>> <michaelanissimov at gmail.com> wrote:
>>> On Fri, Nov 19, 2010 at 11:18 AM, Keith Henson <hkeithhenson at gmail.com> wrote:
>>>
>>>> Re these threads, I have not seen any ideas here that have not been
>>>> considered for a *long* time on the sl4 list.
>>>>
>>>> Sorry.
>>> So who won the argument?
>> I was not aware that it was an argument.  In any case, "win the
>> argument" in the sense of convincing others that your position is
>> correct almost never happens on the net.
> I think this should be rephrased as "almost never YET happens", and
> it does happen.  Sure, it's not going to happen over the time period of
> this discussion, but over 20 years?  And when it does happen, it's nice
> to know why, when, and who, and to have a definitive way to track all
> such shifts.  In other words, I believe it would be great to rigorously
> measure just how much consensus there is, how fast it is changing, and
> in what direction.

I don't know if consensus will mean much in predicting the emergence
of AI.  Historical analogies are always suspect, but I don't think
there was a lot of consensus on aircraft prior to the Wright brothers
building and testing one.

> And of course, reality will eventually convert everyone, or falsify
> the wrong camps.  If an AI launches tomorrow and wipes out half of
> humanity before we overcome it,

Wiping out half seems a lot less likely than a clean sweep or none at
all.  It is just the mathematical nature of exponential growth: an
exponentially spreading process passes through the halfway point so
quickly that halting it exactly there is improbable.  That one-packet
virus I mentioned infected all possible hosts in a time too short for
humans to react.
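To illustrate, here is a minimal logistic-growth sketch in Python.
All the numbers in it (population size, growth rate) are made-up
assumptions for illustration, not measurements from any real worm;
the point is only how narrow the "half infected" window is compared
to the whole run.

# Logistic spread: infected hosts recruit new ones at a rate
# proportional to how many vulnerable targets remain, so growth is
# exponential at first and saturates at the end.
# Every parameter below is a hypothetical, illustrative value.

N = 75_000           # assumed vulnerable population
infected = 1.0       # a single initial infection
rate = 0.7           # assumed growth rate per second
t, dt = 0.0, 0.001   # clock and integration step (simple Euler)

marks = {0.40: None, 0.50: None, 0.60: None}
while infected < 0.999 * N:
    infected += rate * infected * (1.0 - infected / N) * dt
    t += dt
    for frac in marks:
        if marks[frac] is None and infected >= frac * N:
            marks[frac] = t

print(f"40% infected at t = {marks[0.40]:.1f} s")
print(f"50% infected at t = {marks[0.50]:.1f} s")
print(f"60% infected at t = {marks[0.60]:.1f} s")
print(f"window near 'half wiped out': {marks[0.60] - marks[0.40]:.1f} s"
      f" out of {t:.1f} s to near-total saturation")

With these assumed numbers the whole run takes roughly half a minute,
and the process sits between 40% and 60% infected for only about a
second of it.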

> obviously those in the 'wrong' camp
> would be converted.  And obviously, those who have worried about
> unfriendly AI and spent time and effort on it during the last 10 years
> would have completely wasted that effort for the foreseeable future.
> (i.e. more or less, for every dollar we waste instead of spending it
> on achieving immortal life, another person who would otherwise have
> made it into the immortal heavenly future will fail to, and could rot
> in the grave for the rest of eternity.)
>
>>
>>> If there's no consensus, then there's always plenty more to discuss.
>>>
>>> Contrary to consensus, we have people in the transhumanist community
>>> calling us cultists, as deluded as fundamentalist Christians.
>> That's funny, since most of the world thinks the transhumanists are
>> deluded cultists.
>>
>
> This is where it is critical to distinguish between the experts and the
> general population.  The experts will always be in the minority, and
> will almost always have a very different POV from the general
> population's.  To the degree that you track this, and definitively show
> how much worse the non-experts do compared to the experts, people will
> obviously learn to trust the experts sooner.

Maybe it should be that way, but it seems kind of unlikely.  Consider
evolution: a field full of experts, yet rejected by a huge segment of
the population (in the US).

> Also, it helps if
> experts collaborate to speak with a unified voice, for at least as many
> of them as agree, on the moral issues they agree on, instead of always
> sounding no different from the rest of the loner crazy people.
>
Brent, I know maybe a dozen people in this area.  I can't think of any
two of them who agree on anything substantial.

Keith


