[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Jason Resch jasonresch at gmail.com
Mon Mar 27 03:14:57 UTC 2023


On Sun, Mar 26, 2023, 9:29 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:

> On Sun, Mar 26, 2023 at 9:01 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> Correct, that is not the meaning of "understand" that I, or Bender so far
> as I can tell, am considering. AlphaZero is obviously a very intelligent
> chess application, but does it consciously mull over the possible moves
> like a human? I think not, but I won't dwell on this, as you showed me
> below that you understand my meaning.
>

To explain my position:
As I see consciousness, if I throw a ball to a robot and the robot can
reliably catch it, then something within the robot must be conscious of
the ball. I think any other position leads to zombies (things which appear
to be conscious but are not). Zombies lead to logical inconsistencies, and
so I must reject the possibility of things that reliably behave as if
conscious but are not.

, but it's still no more than unconscious software running blindly on
> digital computers.
>

I don't think we've discussed this before: do you think an uploaded human
brain would be conscious (assume simulated to any required level of
fidelity)?


> I think that, forced to decide whether to kill me or
> his digital girlfriend, he would have killed me. In fact that is one reason
> why I have returned to ExI after a long hiatus. The Singularity is here.
>

We are in interesting times indeed.


> Do you think a piece of software running a digital computer can have
> genuine feelings of love for you?
>

Absolutely. I have close to zero doubt on this.

By virtue of: the Church-Turing thesis (every finite process is emulable
by a digital computer), the Bekenstein bound (our physical brain is
finite), and the Anti-Zombie principle (p-zombies are logically
impossible). Together, the first two ensure that the brain is emulable by
a digital computer, and the Anti-Zombie principle ensures the emulation
will be equally as conscious as the physical instance.
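
To make the finiteness premise concrete, here is a rough back-of-the-envelope
figure (my own illustrative numbers, not anything from this thread): the
Bekenstein bound caps the information content of any physical system by its
radius and energy, and for a brain-sized system it gives roughly

% Bekenstein bound with assumed illustrative values:
% R ~ 0.1 m (radius), m ~ 1.5 kg (mass), E = m c^2
\[
  I \le \frac{2\pi R E}{\hbar c \ln 2}
    = \frac{2\pi\,(0.1\,\mathrm{m})\,(1.5\,\mathrm{kg})\,(3\times10^{8}\,\mathrm{m/s})^{2}}
           {(1.055\times10^{-34}\,\mathrm{J\,s})\,(3\times10^{8}\,\mathrm{m/s})\,\ln 2}
    \approx 4\times10^{42}\ \text{bits}
\]

However large that number is, it is finite, which is all the Church-Turing
thesis needs.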



>
> I agree with you here, that her use of "understand" is generous and
>>> perhaps inappropriate for things like Siri or Alexa. I also agree with
>>> you about the calculator: while it can do math, I would not say that it
>>> understands math. Its understanding, if it could be said to have any at
>>> all, would rest almost entirely in "understanding" what keys have been
>>> pressed and which circuits to activate on which presses.
>>>
>>
> I'm glad we agree on that much!
>

☺️


> Understanding involves the capacity to consciously hold something in mind.
>>>>
>>>
>>> I agree with this definition.
>>>
>>
> I'm especially glad we agree on that.
>

☺️

> I find it pretty easy to infer consciousness in most mammals.
> Digital computers, not so much. That is a giant leap of faith.
>

I agree it is easier to infer consciousness in other animals, as doing so
requires one less assumption than inferring the potential consciousness of
computers (namely, the assumption that material composition is
unimportant).

However, I do not think this requires much more faith, as I find thought
experiments such as Chalmers's "fading qualia" argument quite convincing
that material composition cannot make a difference to conscious
perceptions.

> Yes, you continue to believe we can glean the meanings of symbols from
> their forms and patterns. I consider that a logical impossibility.
>

But as I point out, we *know* it's not a logical impossibility because our
brains do it.
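
As a toy illustration of that point (my own sketch; the corpus and numbers
are made up, not anything from Bender's talk or this thread), purely
distributional code can recover relational structure from the form of text
alone:

# Toy distributional semantics: word vectors are built purely from
# co-occurrence patterns in raw text (form alone); no labels, referents,
# or world knowledge are supplied.
from collections import Counter
from math import sqrt

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the cat ate the fish . the dog ate the bone .").split()

def context_vector(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[corpus[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# "cat" and "dog" occur in similar contexts, so their vectors align more
# closely than "cat" and "bone" do: structure recovered from form alone.
cat, dog, bone = map(context_vector, ["cat", "dog", "bone"])
print(cosine(cat, dog))   # higher
print(cosine(cat, bone))  # lower

Whether the structure so recovered counts as genuine "meaning" is, of
course, exactly the point in dispute.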


> What do you think is required to have a mind and consciousness?
>>>
>>
> A human brain would be a good start. :)
>

We agree a human brain is conscious, but what would you say for "X" in:

An entity is conscious if and only if it has X.


>
>
>> Do you think that no computer program could ever possess it, not even if
>>> it were put in charge of an android/robot body?
>>>
>>
> I think no computer program running on a digital computer as we currently
> understand them can possess it. Consciousness might be possible in some
> sort of android body, someday, once we understand what are sometimes
> called the neural correlates of consciousness. What exactly happens in the
> brain when a boxer delivers a knock-out punch? When neuroscience learns the
> precise and detailed answer to that question, we can think about how those
> neural correlates might be synthesized in a laboratory.
>

Anesthesiologists have a theory of unconsciousness called cognitive
unbinding. I'm not sure whether the same mechanism explains loss of
consciousness due to impact. The general idea is that different subregions
of the brain stop meaningfully communicating with each other. One problem
with this, though, is that it is quite difficult to differentiate
unconsciousness from a mere lack of memory formation.

Jason