[extropy-chat] Fools building AIs
eugen at leitl.org
Fri Oct 6 08:40:27 UTC 2006
On Fri, Oct 06, 2006 at 04:10:48AM -0400, Ben Goertzel wrote:
> I do not know what is optimal reasoning, and I don't believe you know
> either, nor does any human.... I definitely do not find your own
> reasoning optimal...
The word "optimal" is meaningless without context and a metric.
> For instance, you have repeatedly claimed quite confidently that "any
> AI not provably Friendly is very very likely to wind up extremely
> unFriendly", without ever presenting any remotely convincing reasoning
> in favor of this contention ;-)
The point is that a population of uncaring, high-fitness beings will cause
extinctions on a very large scale. We humans already do. I wouldn't call that
unfriendly, because unfriendly means actively seeking out a target and
terminating it with extreme prejudice. If you're missing the reasoning for
that, I suggest you look into evolutionary biology and human history,
and take a look out of your window.
> As for my friend's reasoning, I know him pretty well and find his
> reasoning and attitudes quite rational. However, it may be that I am
> using the word "rational" differently than you are.
Again, "rational" doesn't mean much without further qualification.
> I am not sure in what sense you are claiming his attitude is "irrational."
Most people tend to value life. I would call a person who is indifferent
or hostile to his own well-being at least slightly pathological.
I don't see why you're so fixated on that rational thing, whatever that is.
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE