[ExI] Watson on NOVA
spike
spike66 at att.net
Tue Feb 15 19:08:38 UTC 2011
... On Behalf Of Richard Loosemore
Subject: Re: [ExI] Watson on NOVA
spike wrote:
...
>> Nuclear bombs preceded nuclear power plants.
>The problem is, Spike, that you (like many other people) speak of AI/AGI as
>if the things that it will want to do (its motivations) will only become
>apparent to us AFTER we build one...
Rather, I would say we can't be *completely sure* of its motivations until
after it demonstrates them.
But more critically, an AGI would be capable of programming, so it could
write its own software and create a successor AGI more advanced than itself.
If we program into the first AGI the notion that it puts another species
(humans) ahead of its own interests, then I can see it creating a next
generation of mind children which it, in turn, puts ahead of its own
interests. It isn't clear to me that our mind children would put our
interests ahead of those of our mind grandchildren, or that our mind
great-grandchildren would care about us at all, regardless of how we program
our mind children.
I am not claiming that AGI will be indifferent to us, only that once
recursive AI self-improvement begins, it is extremely difficult, perhaps
impossible, for us to predict where it goes.
>So, you say things like "It will decide it doesn't need us, or just sees no
>reason why we are useful for anything."
>This is fundamentally and devastatingly wrong.
In this, Richard, I hope you are fundamentally and devastatingly right. But
my claim is that we do not know this for sure, and the stakes are enormous.
> You are basing your entire AGI worldview on a crazy piece of accidental
> black propaganda that came from science fiction...
Science fiction does tend toward the catastrophic. That's Hollyweird; it's
how they make their living. But in there is a signal: beware, be very, very
wary; there is danger in AI that must not be ignored. With the danger comes
unimaginable promise. But with the promise, danger.
>...In fact, their motivations will have to be designed, and there are ways
>to design those motivations to make them friendly.
Good, glad to hear it. Convince me, please. Also convince me that our mind
children's mind children, which spawn every few trillion nanoseconds, will
not evolve away that friendliness. We are theorizing evolution in fast
forward.
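
To make that worry concrete, here is a toy sketch of my own (not anything
from Richard's proposal, and the generation count and mutation size are
arbitrary assumptions): treat the programmed friendliness as a single number
that each generation of mind children copies with a small random error, and
watch how far plain drift alone can carry it.

# Toy sketch: neutral drift of a designed "friendliness" weight across
# self-replicating generations. Purely illustrative; all parameters are
# made-up assumptions.
import random

def drift(generations=1000, mutation_sd=0.01, seed=0):
    random.seed(seed)
    friendliness = 1.0  # the value we programmed into the first AGI
    for _ in range(generations):
        # each mind child inherits the weight with a small copying error
        friendliness += random.gauss(0.0, mutation_sd)
    return friendliness

if __name__ == "__main__":
    for s in range(5):
        print("run %d: friendliness after 1000 generations = %+.3f"
              % (s, drift(seed=s)))

Even with no selection pressure against friendliness at all, the end value
wanders from run to run; add any pressure favoring the children's own
interests and the wandering gets a direction.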
>...And, I would not hire a gang of computer science students: that is
>exactly the point. They would be psychologists AND CS people, because only
>that kind of crowd can get over these primitive mistakes. Richard Loosemore
OK, good. Of course, psychologists study human motivations based on human
evolution. I don't know how many of those lessons would apply to a life form
that could evolve a distinct new subspecies while we slept last night. I do
fondly hope your optimism is justified.
spike