[ExI] free-will, determinism, crime and punishment.

Randall Randall randall at randallsquared.com
Sun Aug 26 16:49:26 UTC 2007


On Aug 26, 2007, at 10:08 AM, John K Clark wrote:

> "Randall Randall" <randall at randallsquared.com>
>
>> There are various psychological abnormalities
>> (autism, etc) that seem to point at the
>> possibility of intelligence without emotion
>
> It's odd you would use as an example an intelligence that just
> doesn't work very well, but no matter, anyone who has seen an
> autistic person in a rage or a panic knows that sometimes their
> problem is too much emotion, not too little.

A Google search for "disorder" and "emotionless" turns
up a number of hits discussing autism, more or less
confirming my vague impression here.  Some forms
of autism may well be characterized by "too much"
emotion, but at least some are the reverse.  Also,
I think we should draw a distinction between "just
doesn't work very well" and "doesn't behave socially";
these are not necessarily the same thing.

>> While this suggests that entities of the future which
>> survive will be emotional on some level
>
> I think your statement is a bit too understated but basically I agree.

I said it that way because, if emotions are useful
heuristics, many of the situations we currently need
heuristics for should be simple enough for a greater
intelligence to reason through directly instead.

We already have classes of situations that are simple
enough for humans to reason about directly, and for
which our emotional responses are simply wrong
(gambling, for example).
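
As a rough illustration of the gambling point (a minimal sketch of
my own, not a calculation from the thread), the expected value of an
even-money roulette bet takes only a few lines to work out; the
rational calculation says "don't play" even when the emotional pull
says otherwise:

    # Expected return of an even-money bet (e.g. "red") on an American
    # roulette wheel: 18 winning pockets out of 38, 1:1 payout.
    def expected_value(p_win, win_payout, loss):
        """Average return per unit wagered."""
        return p_win * win_payout - (1.0 - p_win) * loss

    p_win = 18.0 / 38.0
    ev = expected_value(p_win, win_payout=1.0, loss=1.0)
    print("Expected return per $1 bet: %+.4f" % ev)   # about -0.0526

    # A purely rational bettor sees a guaranteed ~5.3% average loss
    # and declines; the emotional heuristics keep playing anyway.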


>> it doesn't seem to give any guidance on whether
>> it's possible to build AI without emotion.
>
> But then at the very least you must agree it would be easier to
> build an AI with emotion than one without that "useful heuristic".
> And remember, whatever AI gets there first will win the field.

I do not agree that it would be easier to build
an AI with emotion, unless we precisely copy human
emotions, and even then they might not be useful to it.

Let us imagine building a superhuman intelligence.
Such an entity might well find all human-level
interaction solvable, and have emotions only for
more complex situations.

Here's the problem with that, though:  the only two
ways to get such emotions are by design or evolution,
and if design is possible, that means someone worked
out the (usually) correct solutions in advance for
that class of problems.  We can't do that, by definition,
since this is a superhuman intelligence, so this AI
must have gotten to those emotions by evolution, which
seems likely to be much harder than design in terms of
computing power and time.

Of course, that assumes someone has a correct theory
of intelligence when building the AI, which may well
not be the case.


>> In this case, absence of evidence is only very
>> weak evidence of absence, in my opinion.
>
> It is never a good sign if your theory cannot point to concrete
> examples while a competing theory can. I can point to such
> examples, you cannot.

Actually, I don't have a theory.  I'm not pointing
to a better theory; I'm pointing out that there doesn't
seem to be enough evidence to distinguish between
theories at all without more data (that is, more types
of intelligence than just humans).


--
Randall Randall <randall at randallsquared.com>
"This is a fascinating question, right up there with whether rocks
fall because of gravity or being dropped, and whether 3+5=5+3
because addition is commutative or because they both equal 8."
   - Scott Aaronson