[ExI] Trilemma of Consciousness

William Flynn Wallace foozler83 at gmail.com
Sat May 27 21:07:05 UTC 2017


Stuart wrote:  I don't think that it is impossible that
something could behave intelligently and not be conscious. Especially if
it were purposefully designed that way.
---------------------------------------
Trees react to their environment, sometimes releasing chemicals that warn
nearby trees of a disease or insect invasion, a seemingly altruistic action
protecting their neighbors, some of whom may be related.  Intelligent?
Maybe.  Conscious - not.

bill w

On Sat, May 27, 2017 at 2:43 PM, Stuart LaForge <avant at sollegro.com> wrote:

> John Clark wrote:
>
> > And after reading Darwin the obvious question to ask is "Evolution can't
> > directly detect consciousness any better than I can and yet I know for a
> > fact it managed to produce it at least once, how did it do that?". The
> > only answer is that consciousness is an unavoidable byproduct of
> > intelligent behavior.
>
> I am certain that consciousness and intelligence are correlated. After all,
> intelligence is a semantic property as well. But they might be separable,
> like the stroke victim who is aware of everything going on around him but
> lacks the motor control with which to express "intelligent behavior".
>
> If one can be conscious but unable to behave intelligently (like the
> aforementioned stroke victim), I don't think that it is impossible that
> something could behave intelligently and not be conscious. Especially if
> it were purposefully designed that way.
>
> > I think "consciousness is the way data feels like when it is
> > being processed" is a brute fact so it is pointless to ask how or why.
>
> If you think that fact holds down to the level of a single bit flipping
> its value, then you would belong to the "consciousness is trivial"
> school.
>
> > The Turing Test has nothing to do with Turing Machines, the test is
> > agnostic as to how the subject manages to produce the observed behavior,
> > it's irrelevant.
>
> Indeed, that is the way I first envisioned it, using Russell's paradox.
> That would have been a stronger result that applied to all
> beings and not just Turing machines. But thank Zermelo and Fraenkel for
> screwing up set theory so that Russell's paradox is now off limits and I
> am stuck with a result restricted to Turing machines.
>
> > I don't understand, are you saying the historical Turing test
> > will work for consciousness as well as intelligence, or are you proposing
> > some new test that could distinguish between an intelligent conscious
> > being and an intelligent non-conscious being?
>
> No, I am saying that the Turing test won't be conclusive for either
> intelligence or consciousness with regard to Turing machines unless those
> are all-or-nothing properties of Turing machines. In other words, you run
> a statistical analysis on the AI's behavior for consciousness,
> intelligence, or both and hope your AI doesn't get caught in an infinite
> loop.
>
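[A toy sketch of what "statistical analysis on the AI's behavior, and hope
it doesn't loop forever" might look like in Python. The small-step machine
model and the fuel budget are illustrative assumptions, not anything from
the thread:]

```python
def run_with_fuel(step_fn, state, fuel):
    """Advance a small-step machine until it halts or the fuel runs out.
    The step budget is the only defense against nontermination: halting
    is undecidable in general, so a timed-out run is merely inconclusive."""
    for _ in range(fuel):
        state, done = step_fn(state)
        if done:
            return state
    return None  # inconclusive: the machine may or may not halt

def behavioral_test(step_fn, cases, fuel=10_000):
    """Score a black-box machine by observed behavior on sample inputs,
    the statistical stand-in for deciding its semantic properties."""
    passed = sum(run_with_fuel(step_fn, x, fuel) == want
                 for x, want in cases)
    return passed / len(cases)

# A machine that counts down to zero, and one that never halts.
countdown = lambda s: (s, True) if s <= 0 else (s - 1, False)
diverge = lambda s: (s + 1, False)
```

Here `behavioral_test(countdown, [(3, 0), (5, 0)])` scores 1.0, while any
run of `diverge` exhausts its fuel and comes back `None` - the tester can
never distinguish "still computing" from "looping forever".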
>
> > If so then Evolution must have used that test too, but
> > then it must be based on behavior because behavior is
> > what improves survival chances, not consciousness, but if it's based on
> > behavior then it's just the standard vanilla Turing Test.
>
> What I am saying is that due to Rice's Theorem, any Turing-like test for
> any semantic property (consciousness, intelligence, etc.) applied to a
> Turing machine is reducible to the halting problem, unless Turing machines
> all share that property or are all unable to possess it. Meaning that all
> you can rely upon is faith, or behavioral statistics and hope.
>
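[The reduction Stuart invokes can be sketched in a few lines of Python.
`has_property` and `witness` are hypothetical placeholders - Rice's Theorem
says no real decider for a non-trivial semantic property can exist,
precisely because one would solve the halting problem as below:]

```python
def make_halting_tester(has_property, witness):
    """Rice's-theorem reduction: from any claimed decider for a
    non-trivial semantic property of program behavior (plus a `witness`
    program known to have the property), build a halting-problem solver."""
    def halts(prog, x):
        def combined(y):
            prog(x)            # diverges exactly when prog(x) does
            return witness(y)  # otherwise behaves like the witness
        # combined has the property iff prog halts on x (assuming the
        # everywhere-diverging program lacks the property)
        return has_property(combined)
    return halts

# Toy demonstration with a decidable stand-in "property"
# ("returns 42 on input 0"), checked by actually running the
# program -- safe here only because these sample programs all halt.
halts = make_halting_tester(lambda f: f(0) == 42, lambda y: 42)
```

With the stand-in property, `halts(lambda x: x + 1, 5)` comes out `True`;
since no genuine `has_property` can exist, neither can a Turing-style test
that decides consciousness or intelligence for arbitrary machines.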
> > Consciousness theories are a dime a dozen because unlike intelligence
> > theories there is no way to prove or disprove any of them, so I have no
> > doubt one of those theories could be used to make a consciousness test
> > fine-tuned to make sure humans passed it but computers didn't (such as
> > consciousness theory #93,642: conscious beings must be squishy)
>
> Well, people who think that belong to the "consciousness is null" school.
> Unfortunately my theorem does not rule that out either.
>
> Stuart LaForge
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

