[extropy-chat] Fools building AIs (was: Tyranny in place)

Nathan Barna nlbarna at gmail.com
Fri Oct 6 21:52:08 UTC 2006


Eugen Leitl wrote:
> On Fri, Oct 06, 2006 at 11:53:30AM -0400, Ben Goertzel wrote:
>
> > No... it just doesn't GUARANTEE a value being placed on self-preservation...
>
> I still don't know what you mean when you use 'rational'.
> http://en.wikipedia.org/wiki/Rationality says several things

http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/Reason/ReasonRationality.htm

Additionally, this seems like a good paper, nicely reflective of the
conflict. While it's slightly more sympathetic to Ben's position, the
other side's case is strengthened if we assume that existential
threats are involved, and that technology serving as decision-making
assistance and agency, to compensate for human handicaps measured
against the Standard Picture of rationality, is plausible. In other
words, if this were 1900 and such technology were hopeless, the paper
would be more relevant to its purpose. Not that it isn't relevant to
its purpose; it just would be more so had it enlarged the context.

I doubt anyone believes it's possible to predict eternity. No one we
know can process, or is processing, all of reality, after all. The
question is about rationality's maximum effectiveness, presuming 2006
and prospective techniques – awareness of which could make it
/incoherent/ to deny normative ideals such as the Standard Picture,
and their potential power for either sensible stability or
nonsensical danger – and whether it's better, without exception, to
account for that effectiveness as a genius or as a genius-fool, as it
were.



