[ExI] Fwd: Fwd: Chalmers

William Flynn Wallace foozler83 at gmail.com
Fri Dec 20 17:29:23 UTC 2019


Taking your statement literally: you close your eyes and remember red. I
can close mine and 'see' red or any other color, shape, person, etc.
Do you not?   bill w

On Fri, Dec 20, 2019 at 10:08 AM John Clark via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Thu, Dec 19, 2019 at 9:17 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> Hi Brent:
>
> *> I completely agree with everything else you are saying, including what
>> you are saying about volition, choice…. You’ve even converted me to the
>> camp that says the term AGI is bad.  Long live the term AI!*
>>
>
> So we true believers must band together and punish the AGI heretics who
> deface the sacred name of AI and....sorry sorry.... I get a little carried
> away sometimes.
>
> >> “I don't see how such a neural ponytail could falsify solipsism”
>>
>>
>>
>> *> Are you saying that the left hemisphere of your brain does not know,
>> absolutely, that it is NOT the only hemisphere in existence? *
>>
>
> Yes. Homo sapiens existed for at least a hundred thousand years before they
> consciously knew the brain was important, much less that the brain had
> hemispheres; the ancient Egyptians carefully preserved every organ in the
> body EXCEPT for the brain, which they thought was just humdrum packing
> material that did nothing but hinder the mummification process, so they
> pulled the brain out of the head through the nostrils with an iron hook and
> discarded the resulting mess. Alcor is somewhat more careful because we've
> learned a few things over the centuries, but it hasn't been easy, because
> the brain does not come with any built-in knowledge about the brain.
>
> And unless the corpus callosum that connects your brain's hemispheres has
> been surgically severed, your left hemisphere does not currently know what
> it would be like to be a left hemisphere unconnected to the right
> hemisphere.
>
>
>> > These twins
>> <https://www.cbc.ca/cbcdocspov/features/the-hogan-twins-share-a-brain-and-see-out-of-each-others-eyes?fbclid=IwAR24WaZJuZFucfU41DRX3zgmuWwYhiYL2MfINsctVKgah4PFLoCtL_Zx4Fga2LwRIxXHI9-jg5bX7WvaHeuiloko3WZ-jfzl3i1Cdbxv3HfHZCNhyg>
>> know, directly and absolutely, that their twin’s brain and consciousness
>> exist, don’t you think?
>>
>
> I don't know; it depends on how large the bandwidth between the two brains
> is. I know a little (a very little) about how your brain works just from the
> trickle of information that comes through your emails, but if our brains
> were linked by a fiber optic cable of huge capacity, and we were close
> enough that signal delays were not important, and every thought you had I
> had, and every thought I had you had, it would be meaningless to say that
> you and I were two separate people. I don't know the details of the twins'
> case, but I doubt the bandwidth is that large.
>
>
>> >> “you wouldn't be Brent Allsop anymore, you'd be John Clark.”
>>
>>
>>
>> *> When I’m talking about effing the ineffable, I’m only talking about at
>> the elemental redness level,*
>>
>
> When I close my eyes I can remember what redness is like; when the
> Clark/Allsop hybrid closes his eyes he remembers something called redness,
> but is he remembering Clark's redness, Allsop's redness, or neither?
> Perhaps he's remembering both; perhaps what Clark would call red Allsop
> would call green, so when the Clark/Allsop hybrid is thinking about "red"
> he is thinking about yellow.
>
> > *It’s as simple as the abstract word red isn’t red.  You need a
>> dictionary to know what red means. *
>>
>
> You can find out what the wavelength of red is from one, but nobody learns
> what the quale "redness" means from a dictionary; they learn it from
> examples. Before you learned how to read, somebody pointed to a tomato and
> said "red", then they pointed to a strawberry and said "red"; you figured
> out that the two things had something in common and learned what "redness"
> signified. An AI would also learn from examples, not from a dictionary.
> Dictionaries and definitions are just not fundamentally important: all
> definitions in a dictionary are made of words that all have definitions
> also made of words, and round and round we go. The only thing that can break
> out of that infinite loop and give meaning to language is examples;
> somebody points to a tall green thing and says "tree" and you get the idea.
>
>
>> > I can't see how Darwinian Evolution managed to come up with a conscious
>>> creature like me
>>>
>>
>
>
>> *> Darwinian evolution decided to run your consciousness directly on
>> physical qualities,*
>>
>
> Why would Darwinian evolution care about consciousness, or even know that
> such a thing exists?
>
>
>> * > because it is more efficient and it didn’t need the extra hardware
>> required to make you substrate independent.*
>>
>
> So you think it would be difficult to make a super intelligent computer
> that was NOT conscious but easier to make one that WAS conscious. So by
> Occam's razor, if you ever run across a super intelligent computer your
> default position should be that it is conscious, and you'd better hope the
> AI feels the same way about you.
>
> John K Clark