[ExI] Canonizer 2.0

Brent Allsop brent.allsop at gmail.com
Tue Dec 25 23:26:00 UTC 2018


Hi Bill w,

Yes, you're close.  You realize that everyone's knowledge of stuff could be
different from, or completely missing from, your own knowledge of stuff.

So, take for example the abstract name for the neurotransmitter glutamate,
and our abstract descriptions of how glutamate reacts in a synapse.  Now
let's assume that science objectively demonstrates, or at least can't
falsify, the theory that it is that glutamate physics, reacting in a
synapse, that you know as your redness physical quality, and that the
neurotransmitter glycine is your greenness knowledge.

Now, you need to be able to observe those two neurotransmitters, in the
correct synapses, when you and someone else look at a red light.  If one
person uses glutamate to represent red while the other uses glycine, and
vice versa for green, you can then make, in an objectively justified way,
an effing statement like "Your redness is like my greenness."

Does that help?

Brent


On Tue, Dec 25, 2018 at 10:58 AM William Flynn Wallace <foozler83 at gmail.com>
wrote:

>
> In other words, what is required to bridge the explanatory gap is to
> discover which set of our abstract descriptions of physics in the brain
> should be interpreted as a redness physical quality, a greenness physical
> quality, and so on.  Brent
>
> A trip to my audiologist was interesting:  I had a buzz in one ear from my
> hearing aid, and invited him to listen to it so he could understand what to
> correct.  He said that it would not help because he would not hear the same
> thing as I did - maybe not hear it at all.
>
> So if different people hear different things from the exact same sound
> source, it seems that being exposed to red and green things does not ensure
> that the people will see, much less experience in their brains, the same
> things.
>
> I have no background in the qualia problem as I have read it being
> discussed here, so my thinking may be way off.
>
> bill w
>
> On Tue, Dec 25, 2018 at 11:02 AM Brent Allsop <brent.allsop at gmail.com>
> wrote:
>
>>
>>
>> Good questions, John.  We need to be clearer about exactly what the
>> “solution to the so-called hard problem” described in the “Representational
>> Qualia Theory” camp, which has so much expert consensus, is and is not:
>>
>>
>>
>> https://canonizer.com/topic/88-Representational-Qualia/6?
>>
>>
>>
>> First off, many people take the “hard problem” to mean many different things.
>> The specific “hard problem” we are dealing with in both of these
>> canonizer.com topics is just the “explanatory gap”.  How do you know
>> what it is like to be a bat?  What did Mary learn when she experienced red
>> for the first time, even though she knew, abstractly, everything about red
>> beforehand?  How do you “eff the ineffable”, and all that.  In my opinion,
>> this is the only hard problem.  Everything else falls within what David
>> Chalmers describes as easy problems.  It’s surprising how many people
>> think the “hard problem” is something completely different from the
>> explanatory gap, or from the qualitative nature of consciousness problem.
>>
>>
>>
>> Second, this isn’t YET a solution to the hard problem.  It is a theoretical
>> meta approach to observing physics in a new, non-qualia-blind way (see
>> the above camp for a description of qualia blindness).  It is only a
>> prediction that if experimentalists stop being qualia blind, they will soon
>> be able to objectively detect whether someone does or does not have
>> something like red / green qualia inversion.
>>
>>
>>
>> In other words, what is required to bridge the explanatory gap is to
>> discover which set of our abstract descriptions of physics in the brain
>> should be interpreted as a redness physical quality, a greenness physical
>> quality, and so on.  Once an experimentalist does this, we will then be able to “eff the
>> ineffable” or bridge the explanatory gap.  In other words, the prediction
>> being made in the “Representational Qualia Theory” camp needs to be
>> verified by experimentalists, as the theory predicts is about to happen,
>> before it will be a real solution to the qualitative hard problem.
>>
>>
>>
>> Does that help?
>>
>>
>>
>> On Tue, Dec 25, 2018 at 9:29 AM William Flynn Wallace <
>> foozler83 at gmail.com> wrote:
>>
>>> coming up with a theory of consciousness is easy but coming up with a
>>> theory of intelligence is not.   John Clark
>>>
>>> Just what sort of theory do you want, John?  Any abstract entity, such as
>>> intelligence, love, hate, or creativity, has to be dragged down to an
>>> operational definition involving measurable things.  For many years the
>>> operational definition of intelligence has been scores on intelligence tests, and
>>> of course there are many different opinions as to what tests are
>>> appropriate, meaning in essence that people differ on just what
>>> intelligence is.
>>>
>>> The problem is that it is not anything.  Oh, it is reducible in theory
>>> to actions in the brain - neurons and hormones and who knows what from the
>>> glia.  Love is those actions as well, and so is every other thing you can
>>> think of.  But people have generally resisted reductionism in this area.
>>> Me too, until someone can find a use for it.
>>>
>>> Look up the word 'nice' and you will find a trail of very different
>>> meanings.  Just what meaning is correct?  All of them - at least they were
>>> true at the time a particular use occurred.
>>>
>>> Intelligence is that way too - it is whatever we want to mean by the
>>> word.  Most want to use it in a way that means one thing (usually
>>> determined by factor analysis).  Some want to call it several things which
>>> may intercorrelate to some extent.  The first idea usually wins out.
>>>
>>> Whatever intelligence is, the intelligence test is the most useful test in
>>> existence, because it correlates with, and thus predicts, more things than
>>> any other test.
>>>
>>> So - the best theory is the one which predicts more things in the 'real'
>>> world than any other, and the operational definition wins.  And nobody is
>>> really happy with that.  I can't understand it.
>>>
>>> bill w
>>>
>>> On Tue, Dec 25, 2018 at 9:29 AM John Clark <johnkclark at gmail.com> wrote:
>>>
>>>> On Fri, Dec 21, 2018 at 12:08 PM Brent Allsop <brent.allsop at gmail.com>
>>>> wrote:
>>>>
>>>> >  *we've launched Canonizer 2.0.*
>>>>> *My Partner Jim Bennett just put together this video:*
>>>>>
>>>>> https://vimeo.com/307590745
>>>>>
>>>>
>>>> I notice that the third most popular topic on the Canonizer is "the
>>>> hard problem" (beaten only by theories of consciousness and God).
>>>> Apparently this too has something to do with consciousness, but it would
>>>> seem to me the first order of business should be to state exactly what
>>>> general sort of evidence would be sufficient to consider the problem
>>>> solved.  I think the evidence from biological evolution is overwhelming
>>>> that if you'd solved the so-called "easy problem", which deals with
>>>> intelligence, then you've come as close to solving the "hard problem" as
>>>> anybody is ever going to get.
>>>>
>>>> I also note there is no listing at all for "theories of intelligence",
>>>> and I think I know why: coming up with a theory of consciousness is easy,
>>>> but coming up with a theory of intelligence is not.  It takes years of
>>>> study to become an expert in the field of AI, but anyone can talk about
>>>> consciousness.
>>>>
>>>> However, I think the Canonizer does a good job of specifying what
>>>> "friendly AI" means; in fact it's the best definition of it I've seen:
>>>>
>>>> "*It means that the entity isn't blind to our interests. Notice that I
>>>> didn't say that the entity has our interests at heart, or that they are its
>>>> highest priority goal. Those might require intelligence with a human shape.
>>>> But an SI that was ignorant or uncaring of our interests could do us
>>>> enormous damage without intending it.*"
>>>>
>>>>  John K Clark