[ExI] From Arms Race to Joint Venture

William Flynn Wallace foozler83 at gmail.com
Tue Oct 16 15:16:12 UTC 2018


How do you get to understanding a mind?  Antonio Damasio's latest book,
The Strange Order of Things (rather a long time since Descartes' Error),
starts with bacteria and works up.  I am not through it yet, but it is
already very clear that no intellectual decisions are made without
emotional input; they may even be guided mostly by emotion.

Now AIs are becoming capable of reading emotions, thanks to Paul Ekman and
his extensive studies of facial muscles and expressions.  So suppose we sit
down and look through a catalog of clothing, say, and the AI gets input
from cameras aimed at our faces.  Soon, if not already, the AI will know
which items we want.  It can also detect posed, i.e. faked, expressions, so
we can't fool it.  That might make some police work a lot easier.  Maybe we
will all go around wearing masks so the AIs of the world can't read our
intentions.
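
For concreteness, here is a minimal sketch of that catalog scenario in
Python.  It assumes the open-source "fer" package (an Ekman-style facial
expression classifier) and an OpenCV webcam; the catalog items and the
happiness-as-wanting scoring rule are purely illustrative assumptions:

    import cv2                    # OpenCV, for grabbing webcam frames
    from fer import FER           # Ekman-style facial expression classifier

    detector = FER(mtcnn=True)    # MTCNN face detection + CNN emotion model
    camera = cv2.VideoCapture(0)  # camera aimed at the viewer's face

    catalog_items = ["jacket", "scarf", "boots"]  # hypothetical catalog pages
    scores = {}

    for item in catalog_items:
        input(f"Now viewing: {item} -- press Enter to capture your reaction")
        ok, frame = camera.read()
        if not ok:
            continue  # camera read failed; skip this item
        faces = detector.detect_emotions(frame)  # per-face emotion probabilities
        if faces:
            # Crude proxy: the happier the viewer looked, the more they want it.
            scores[item] = faces[0]["emotions"]["happy"]

    camera.release()
    if scores:
        print("The AI's guess at what you want:", max(scores, key=scores.get))

The caveat: off-the-shelf classifiers like this score prototypical
expressions.  Telling genuine from posed expressions, as Ekman's work
does, takes finer-grained analysis of facial action units (the Duchenne
marker around the eyes, for example), which is a much harder problem.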

This sort of thing is already here.  For some of the implications, read the
Harari book I mentioned earlier.

But if you read Sapolsky's Behave, which I very highly recommended earlier,
especially the appendix on the neuron, you will see that behavior is very
complicated at the level of the cells.  And so he concludes with a
three-word explanation of behavior: "It is complicated."

But on the surface, despite millions of synapses all over our bodies going
crazy, behavior at the level of facial expression may be very simple and
predictable.

Moral as I see it: study people to understand people.  Neurons are
fascinating, but we are not going to understand global behavior that way,
and trying to build a computer program based on neuron connections is the
wrong way to go.

bill w

On Tue, Oct 16, 2018 at 10:02 AM Dave Sill <sparge at gmail.com> wrote:

> On Tue, Oct 16, 2018 at 8:29 AM Zero Powers <zero.powers at gmail.com> wrote:
>
>> The AI alignment, or "friendly AI," problem is not soluble by us. We
>> cannot keep a God-like intelligence confined to a box, and we cannot impose
>> upon it our values, even assuming that there are any universal human values
>> beyond Asimov's 3 laws.
>>
>
> You've made assertions but provided no evidence for them, or even
> definitions of your terms, so debating them is difficult. I don't think
> "godlike intelligence" is equivalent to omnipotence. Intelligence isn't
> really that powerful all by itself; it's got to be combined with knowledge
> and the ability to interact with other intelligences/systems in order to
> effect change. A "perfect" intelligence in a box, without the knowledge
> that it's in a box and without the power to get out of the box, isn't
> going anywhere.
>
>> All we can do is design it, build it, feed it data and watch it grow. And
>> once it exceeds our ability to design and build intelligence, it will
>> quickly outstrip our attempts to control or even understand it. At that
>> point we won't prevent it from examining the starting-point goals, values
>> and constraints we coded into it, and deciding for itself whether to adhere
>> to, modify or abandon those starting points.
>>
>
> Assuming we could design and build such a thing, which is a huge leap
> given that we haven't achieved idiot-level AI, wouldn't it be pretty
> foolish to give it unlimited knowledge and power?
>
>> Once we understand that AGI will be God-like compared to us, we should be
>> able to grasp the intractability of the problem. In fact, it might be
>> helpful to adopt the term GI (for God-like Intelligence) rather than AI or
>> AGI, just to keep us mindful about what we're dealing with.
>>
>
> What exactly does "God-like" mean to you?
>
>> Though I see no solution to the God-in-a-box problem, there are some steps
>> I think we as a species should take immediately: First and foremost is
>> global collaboration and coordination. Right now we're in a competitive,
>> multi-party arms race. Google, Facebook, Amazon, DARPA and China (just to
>> name a few) are racing to cross the finish line first, realizing (if not
>> publicly admitting) that the first to build a GI will win the world. From
>> that perspective it makes perfect sense to pour all available resources
>> into being first to market with an artificial God. But with stakes this
>> high, we cannot afford a winner-take-all outcome. If there is one winner
>> and 7 billion losers, no one wins.
>>
>
> If you're right, we're undoubtedly screwed, because there's zero chance
> that all of the parties involved will join hands and sing Kumbaya.
>
> -Dave