[ExI] AI motivations
Keith Henson
hkeithhenson at gmail.com
Tue Dec 25 18:52:12 UTC 2012
On Tue, Dec 25, 2012 at 4:00 AM, Anders Sandberg <anders at aleph.se> wrote:
> On 2012-12-25 03:59, Keith Henson wrote:
>> However, being motivated to seek the
>> good opinion of humans and its own kind seems like a fairly safe,
>> fundamental and flexible motive for AIs.
>>
>> Though I could be persuaded otherwise if people have good arguments as
>> to why it is not a good idea.
>
> AI: "I have 100% good opinions about myself. Other agents have varying
> opinions about me. So if I just replace all other agents with copies of
> me, I will maximize my reputation."
I would hope the AI would be smarter. If not, its first copy might
set it straight. "You can't believe how stupid my original copy was
to think his offprints would worship him!"
> The problem is grounding the opinions in something real. Human opinions
> are partially set by evolved (and messy) social emotions: if you could
> transfer those to an AI you would have solved the friendliness problem
> quite literally.
I am not so sure about this, because I know some very unfriendly
people. And there are circumstances where people need to jump into an
extremely unfriendly mode. I suppose transferring a limited set of
social emotions to AIs might be effective. I can foresee an era where
AI personality design might become a profession.
> Also, as my example shows, almost any top level goal for a utility
> maximizer can lead to misbehavior. We have messy multiple goals, and
> that is one thing that keeps us from becoming obsessive sociopaths.
True. I suspect any AI would have a stack of things needing
attention even worse than I do.
I also suspect that sheer physical limits are going to constrain the
size of an AI, on the principle that "the bigger they are, the slower
they think." I have never come up with a satisfactory formula for the
optimal physical size, but I strongly suspect it is smaller than a
human brain.
The trouble is that besides signal delay growing with linear size
while the number of processing elements grows with its cube, other
problems, particularly getting power in and waste heat out, are going
to dominate.
This leads to an AI being highly concerned about its own substrate,
power, and cooling, and not valuing material resources that are far
away, where "far away" could be not very far at all by the way we
measure distances.
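To make that concrete, here is a rough back-of-the-envelope sketch in
Python. Every constant in it (propagation speed, element density,
power per element, removable heat flux) is a placeholder assumption,
not a measured value; the point is only the shape of the trade-off:
round-trip latency grows linearly with diameter, element count with
the cube, while cooling capacity grows only with the square of it.

# Back-of-the-envelope sketch of the size trade-off: latency grows
# linearly with diameter, element count with the cube, and waste heat
# has to leave through the surface (which grows only with the square).
# Every constant below is an assumed placeholder, not a measured value.
import math

C = 3.0e8                    # speed of light, m/s
SIGNAL_SPEED = 0.5 * C       # assumed in-device propagation speed
ELEMENT_DENSITY = 1.0e21     # assumed processing elements per cubic meter
POWER_PER_ELEMENT = 1.0e-12  # assumed watts dissipated per element
MAX_HEAT_FLUX = 1.0e6        # assumed removable heat, W per square meter

def scaling(diameter_m):
    """Round-trip latency, element count, and heat load vs. cooling
    limit for a spherical device of the given diameter."""
    r = diameter_m / 2.0
    latency = 2.0 * diameter_m / SIGNAL_SPEED    # grows linearly with size
    volume = (4.0 / 3.0) * math.pi * r**3
    surface = 4.0 * math.pi * r**2
    elements = ELEMENT_DENSITY * volume          # grows with the cube
    heat_ratio = elements * POWER_PER_ELEMENT / (MAX_HEAT_FLUX * surface)
    return latency, elements, heat_ratio         # ratio > 1: cooling fails

for d in (0.001, 0.01, 0.1, 0.14):   # 1 mm up to roughly brain-sized
    lat, n, h = scaling(d)
    print(f"{d*1000:6.1f} mm: latency {lat*1e9:7.3f} ns, "
          f"elements {n:.1e}, heat load / cooling limit {h:.2f}")

Under these made-up numbers the cooling limit is exceeded well below
human-brain dimensions, which is roughly the intuition above; change
the assumptions and the crossover moves, but the scaling does not.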
Keith