[ExI] Some new angle about AI.

x at extropica.org
Wed Jan 6 18:20:30 UTC 2010


2010/1/6 John Clark <jonkc at bellsouth.net>:
> On Jan 6, 2010, Aware wrote:

> The trouble with all these discussions is that people point to things and
> say, look at that (computer made of beer cans, Chinese room, ameba, or
> whatever) and say that's intelligent but *OBVIOUSLY*  it's not conscious;
> but it is not obvious at all and in fact they have absolutely no way of
> knowing it is true.

I agree that consciousness (self-awareness) is not obvious and can
only be inferred.  By definition.

It seems to me that you're routinely conflating "intelligence" and
"consciousness", but then, oddly, you distinguish between them by
saying one is much easier than the other.

I AGREE that, in terms of the evolutionary process that led to the
emergence of intelligence and then consciousness (self-awareness) on
this planet, the evolution of "intelligence" was a much bigger step,
requiring far more time, than the evolution of consciousness, which is
just an additional layer of supervision.


> If you show me something and call it "intelligent" then
> I can immediately call it conscious and don't even need to express
> reservations on the use of the word with quotation marks as you did because
> we learned from the history of Evolution that consciousness is easy but
> intelligence is hard.

So why don't you agree with me that intelligence must have "existed"
(been recognizable, had there been an observer) for quite a long time
before evolutionary processes stumbled upon the additional,
supervisory hack of self-awareness?


>> when pinned down John appears to go to either limit:  Mr. Jupiter Brain
>> wouldn't be very smart if he didn't model himself
>
> Yes.
>
>> or the other (panpsychist) view that even an amoeba has consciousness, but
>> just an eensy teensy bit.
>
> If an amoeba is a eensy bit intelligent then it's two eensy bits conscious.
>  John K Clark

It doesn't seem to me to follow at all that if an amoeba can be said
to be intelligent (it displays behaviors of effective prediction and
control appropriate to its environment of adaptation), it can
necessarily be said to be conscious (it exploits awareness of its own
states and actions).  That seems to me to be an additional layer of
supervisory functionality that isn't implemented in the relatively
simple structure of the amoeba.

You're asserting a continuous QUANTITATIVE scale of consciousness,
from the amoeba (and presumably below) up to Mr. Jupiter Brain (and
presumably beyond).

I'm asserting ongoing, punctuated, QUALITATIVE developments, with
novel hacks like self-awareness discovered at some point, exploited
for the additional fitness they confer, and eventually superseded by
even newer hacks providing greater benefits over a greater scope of
interaction.  I fully expect that self-awareness will eventually be
superseded by a fractal form of hierarchical awareness.

- Jef
