[ExI] Best case, was Hard Takeoff

Keith Henson hkeithhenson at gmail.com
Sun Nov 28 23:45:27 UTC 2010


On Fri, Nov 26, 2010 at 5:00 AM,  Michael Anissimov
<michaelanissimov at gmail.com> wrote:
>
> On Fri, Nov 19, 2010 at 11:18 AM, Keith Henson <hkeithhenson at gmail.com> wrote:
>
>> Re these threads, I have not seen any ideas here that have not been
>> considered for a *long* time on the sl4 list.
>>
>> Sorry.
>
> So who won the argument?

I was not aware that it was an argument.  In any case, "win the
argument" in the sense of convincing others that your position is
correct almost never happens on the net.

> My point is that the SIAI supporters and Eliezer
> Yudkowsky are correct, and the critics are wrong.

Chances are none of you are right and AI will arrive from some totally
unexpected direction, such as a companion robot in a Japanese nursing
home being plugged into cloud computing.

> If there's no consensus, then there's always plenty more to discuss.
>
> Contrary to consensus, we have people in the transhumanist community calling
> us cultists and as deluded as fundamentalist Christians.

That's funny, since most of the world thinks the transhumanists are
deluded cultists.

snip to next msg

> I guess all self-improving software programs will inevitably fall prey to
> infinite recursion or the halting problem, then.  Please say "yes" if you
> believe this.

No.

>> If there is no way, even in principle, to algorithmically determine
>> beforehand whether a given program with a given input will halt or
>> not, would an AI risk getting stuck in an infinite loop by messing
>> with its own programming?

Sure there is.  Watchdog timers, automatic reboot to a previous version.
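
As a minimal sketch of that approach (the script names, time limit, and
the use of Python here are illustrative assumptions, not anything
specified above): run the candidate self-modified version under an
external time limit, and if it does not finish, kill it and fall back to
the previous known-good version.  That sidesteps having to predict
halting in advance.

    import subprocess

    TIME_LIMIT = 60  # seconds the watchdog allows the candidate to run

    def run_with_watchdog(candidate="agent_v2.py", fallback="agent_v1.py"):
        """Run a possibly self-modified program under a watchdog timer.

        If the candidate runs past TIME_LIMIT or crashes, revert to the
        previous version instead of trying to prove it halts.
        """
        try:
            # Let the new version run, but never longer than TIME_LIMIT.
            return subprocess.run(["python", candidate],
                                  timeout=TIME_LIMIT, check=True)
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            # Candidate hung or failed: "reboot" to the prior version.
            return subprocess.run(["python", fallback], check=True)

    if __name__ == "__main__":
        run_with_watchdog()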

Keith



