[ExI] Unfriendly AI is a mistaken idea.
sjatkins at mac.com
Sun Jun 17 18:16:12 UTC 2007
On Jun 17, 2007, at 1:54 AM, Stathis Papaioannou wrote:
> On 17/06/07, Samantha Atkins <sjatkins at mac.com> wrote:
> Actually something more personally frightening is a future where no
> amount of upgrades or at least upgrades available to me will allow me
> to be sufficiently competitive. At least this is frightening in a
> scarcity society where even basic subsistence is by no means
> guaranteed. I suspect that many are frightened by the possibility
> that humans, even significantly enhanced humans, will be second class
> by a large and exponentially increasing margin.
> I don't see how there could be a limit to human enhancement. In
> fact, I see no sharp demarcation between using a tool and merging
> with a tool. If the AIs were out there on their own, with their
> own agendas and no interest in humans, that would be a problem. But
> that's not how it will be: at every step in their development, they
> will be selected for their ability to be extensions of ourselves. By
> the time they are powerful enough to ignore humans, they will be the
You may want to read Hans Moravec's book 'Robot: Mere Machine to
Transcendent Mind'. Basically it comes down to how much of our
thinking and conceptual ability is rooted in our evolutionary design
and how much we can change and still be remotely ourselves rather than
a nearly complete AI overwrite. Even as uploads if we retain our 3-D
conceptual underpinnings we may be at a decided disadvantage in
conceptual domains where such is at best a very crude approximation.
An autonomous AI thinking a million times or more faster than you is
not a "tool". As such minds become possible, do you believe that all
instances will be constrained to being controlled by ultra slow human
interfaces? Do you believe that in a world where we have to argue to
even do stem cell research and human enhancement is seen as a no-no
that humans will be enhanced as fast as more and more powerful AIs are
developed? Why do you believe this if so?
> In those
> circumstances I hope that our competition and especially Darwinian
> models are not universal.
> Darwinian competition *must* be universal in the long run, like
> entropy. But just as there could be long-lasting islands of low
> entropy (ironically, that's what evolution leads to), so there could
> be long-lasting islands of less advanced beings living amidst more
> advanced beings who could easily consume them.
I disagree. Our Darwinian competition models and notions are too
tainted by our own EP (evolutionary psychology), imho. I do not think
it is the only viable or inevitable model for all intelligences. But I
have no way to prove it.