[ExI] More thoughts on sentient computers

Giovanni Santostasi gsantostasi at gmail.com
Thu Feb 23 06:55:54 UTC 2023


Brent,
Did you make the video you linked? Is that you in the video?
I'm listening to it and I have strong reactions to it, while I'm having my
dinner, lol.
It says somewhere: "We could expose a digital device to a strawberry and
the computer can recognize it but it does so because it makes a digital
abstraction of the strawberry but not because it has a subjective
experience of the strawberry". I'm paraphrasing but something like that.
That digital abstraction IS the experience!!!
And our experience is an abstraction, it is!!!
There is no difference.
The only difference is that our brain is self-referential, so we are "aware" of
this experience, but that can be built easily into a digital device, and it
will be done soon, if it has not been done already in some lab (I'm sure there
are very advanced forms of AI that have not even been revealed to the public,
just as the LaMDA described by Lemoine was not).
If there is anything to learn from ChatGPT, it is that simple neural networks
(ok, deep ones with several hidden layers and so on) and relatively simple
approaches to AI can give rise to incredibly complex behavior and emergent
properties. The sensation of red is simply what happens when the brain
communicates to itself that it is experiencing something. What is
mysterious about that? The language of the brain is physical sensations; it
is a physical system, so how else is it supposed to communicate information to
itself? I ask you: what would you expect instead? What is an explanation that
includes the solution to this "mystery" (which is not one) supposed to look
like? Like the sensation itself? You already have it. You have the
territory already. The map will never have the wetness of the river it
tries to represent, and if it does, it is a very bad map.
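The self-reference claim above can be made concrete with a toy sketch. This is
purely illustrative, with invented names, and makes no claim about real brains
or real AI systems: a system that builds a first-order abstraction of a
stimulus, and also maintains a second-order report about its own state, i.e. it
"communicates to itself that it is representing something".

```python
# Toy sketch of a self-referential recognizer (all names are invented
# for illustration; this is not a model of any actual cognitive system).

class SelfReferentialRecognizer:
    def __init__(self):
        # First-order "digital abstraction" of the most recent stimulus.
        self.state = None

    def perceive(self, stimulus: str) -> None:
        # Build an abstraction of the stimulus: a category label plus
        # a crude feature summary (here, just its distinct characters).
        self.state = {"category": stimulus, "features": sorted(set(stimulus))}

    def introspect(self) -> str:
        # Self-referential step: the system reports on its own state,
        # rather than on the external stimulus directly.
        if self.state is None:
            return "no current experience"
        return f"I am currently representing a {self.state['category']}"


r = SelfReferentialRecognizer()
r.perceive("strawberry")
print(r.introspect())  # -> I am currently representing a strawberry
```

Whether such a loop suffices for awareness is exactly the point under debate;
the sketch only shows that self-report about internal state is cheap to build.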
Giovanni


On Wed, Feb 22, 2023 at 9:16 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Giovanni,
>
> On Wed, Feb 22, 2023 at 5:28 PM Giovanni Santostasi via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Brent you are fixated with Qualias.
>> Qualia is one of the silliest idea ever.
>>
>
> I admit the term Qualia is silly, and SOOO misleading, since what we're
> talking about are real physical intrinsic color qualities, themselves.  Physicists
> Don't Understand Color
> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>
> .
> For example, let me ask you this: What is it in this world that has a
> redness quality (What behaves the way it does, because of its redness
> quality)?  Nobody, including you, knows that most simple and most
> fundamental question about the nature of colored physical reality.
> All you know of things is the color they seem to be, all falsely colored
> by your brain for various different reasons to make us more intelligent and
> motivated.  (i.e. our brain wants the red things to stand out, so it
> represents them with a particular physical quality)
> Consciousness isn't the problem.  What color are things is the real
> problem.
>
> Consciousness isn't a 'Hard Problem' it's a color problem
> <https://canonizer.com/videos/consciousness?chapter=Representational_Qualia_Theory_Consensus>
> .
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>