[ExI] Canonizer 2.0

John Clark johnkclark at gmail.com
Tue Dec 25 15:24:24 UTC 2018


On Fri, Dec 21, 2018 at 12:08 PM Brent Allsop <brent.allsop at gmail.com>
wrote:

>  *we've launched Canonizer 2.0.*
> *My Partner Jim Bennett just put together this video:*
>
> https://vimeo.com/307590745
>

I notice that the third most popular topic on the Canonizer is "the hard
problem" (beaten only by theories of consciousness and God). Apparently
this too has something to do with consciousness, but it seems to me the
first order of business should be to state exactly what general sort of
evidence would be sufficient to consider the problem solved. I think the
evidence from biological evolution is overwhelming that if you've solved
the so-called "easy problem", which deals with intelligence, then you've
come as close to solving the "hard problem" as anybody is ever going to
get.

I also note there is no listing at all for "theories of intelligence", and
I think I know why: coming up with a theory of consciousness is easy, but
coming up with a theory of intelligence is not. It takes years of study to
become an expert in the field of AI, but anyone can talk about consciousness.

However, I think the Canonizer does a good job of specifying what "friendly
AI" means; in fact, it's the best definition of it I've seen:

"*It means that the entity isn't blind to our interests. Notice that I
didn't say that the entity has our interests at heart, or that they are its
highest priority goal. Those might require intelligence with a human shape.
But an SI that was ignorant or uncaring of our interests could do us
enormous damage without intending it.*"

 John K Clark