[ExI] Canonizer 2.0

William Flynn Wallace foozler83 at gmail.com
Tue Dec 25 16:26:48 UTC 2018


"coming up with a theory of consciousness is easy but coming up with a
theory of intelligence is not."  - John Clark

Just what sort of theory do you want, John?  Any abstract entity like
intelligence, love, hate, or creativity has to be dragged down to
operational definitions involving measurable things.  For many years the
operational definition of intelligence has been the score on an
intelligence test, and of course there are many different opinions as to
which tests are appropriate, meaning in essence that people differ on just
what intelligence is.

The problem is that intelligence is not a thing at all.  Oh, it is
reducible in theory to actions in the brain - neurons and hormones and who
knows what from the glia.  But so is love, and so is every other thing you
can think of.  People have generally resisted reductionism in this area.
Me too, until someone can find a use for it.

Look up the word 'nice' and you will find a trail of very different
meanings.  Which meaning is correct?  All of them - at least each was
correct at the time a particular use occurred.

Intelligence is that way too - it is whatever we want to mean by the word.
Most want to use it to mean a single thing (usually a general factor
determined by factor analysis).  Some want to call it several things which
may intercorrelate to some extent.  The first idea usually wins out.
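
A minimal sketch of what "a single thing determined by factor analysis"
means in practice, with made-up subtest names and numbers rather than any
published psychometric model: it pulls the largest factor out of the
correlation matrix of several intercorrelated subtest scores.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    # Simulate subtests that all share one hypothetical latent ability, g.
    g = rng.normal(size=n)
    subtests = np.column_stack([
        0.8 * g + 0.6 * rng.normal(size=n),  # "vocabulary"
        0.7 * g + 0.7 * rng.normal(size=n),  # "matrix reasoning"
        0.6 * g + 0.8 * rng.normal(size=n),  # "digit span"
    ])

    # Intercorrelations among the subtests, then the largest factor of that
    # correlation matrix - the usual stand-in for a single general factor.
    corr = np.corrcoef(subtests, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)        # eigenvalues in ascending order
    loadings = eigvecs[:, -1]                      # loadings on the largest factor
    loadings = loadings * np.sign(loadings.sum())  # eigenvector sign is arbitrary
    explained = eigvals[-1] / eigvals.sum()

    print("loadings on the first factor:", np.round(loadings, 2))
    print("share of variance it explains:", round(explained, 2))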

Whatever intelligence is, the intelligence test is the most useful test in
existence, because its scores correlate with, and thus predict, more things
than the scores of any other test do.
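
Another toy sketch, this time of "correlates with and thus predicts": for
standardized scores, the best linear prediction of an outcome is simply the
correlation times the test score.  The correlation value here is assumed
for illustration, not taken from any study.

    import numpy as np

    r = 0.5                              # assumed test/outcome correlation
    test_z = np.array([-1.0, 0.0, 2.0])  # test scores in standard-deviation units

    predicted_outcome_z = r * test_z     # note the regression toward the mean
    print(predicted_outcome_z)           # [-0.5  0.   1. ]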

So - the best theory is the one which predicts more things in the 'real'
world than any other, and the operational definition wins.  And nobody is
really happy with that.  I can't understand it.

bill w

On Tue, Dec 25, 2018 at 9:29 AM John Clark <johnkclark at gmail.com> wrote:

> On Fri, Dec 21, 2018 at 12:08 PM Brent Allsop <brent.allsop at gmail.com>
> wrote:
>
>> *we've launched Canonizer 2.0.*
>> *My Partner Jim Bennett just put together this video:*
>>
>> https://vimeo.com/307590745
>>
>
> I notice that the third most popular topic on the Canonizer is "the hard
> problem" (beaten only by theories of consciousness and God). Apparently
> this too has something to do with consciousness, but it would seem to me
> the first order of business should be to state exactly what general sort
> of evidence would be sufficient to consider the problem solved. I think
> the evidence from biological evolution is overwhelming that if you've
> solved the so-called "easy problem", which deals with intelligence, then
> you've come as close to solving the "hard problem" as anybody is ever
> going to get.
>
> I also note there is no listing at all for "theories of intelligence", and
> I think I know why: coming up with a theory of consciousness is easy, but
> coming up with a theory of intelligence is not. It takes years of study to
> become an expert in the field of AI, but anyone can talk about
> consciousness.
>
> However, I think the Canonizer does a good job of specifying what
> "friendly AI" means; in fact, it's the best definition of it I've seen:
>
> "*It means that the entity isn't blind to our interests. Notice that I
> didn't say that the entity has our interests at heart, or that they are its
> highest priority goal. Those might require intelligence with a human shape.
> But an SI that was ignorant or uncaring of our interests could do us
> enormous damage without intending it.*"
>
>  John K Clark