[ExI] Ben Goertzel on Large Language Models

Jason Resch jasonresch at gmail.com
Fri Apr 28 22:14:54 UTC 2023


On Fri, Apr 28, 2023, 12:46 AM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I used to believe that consciousness is fundamental because of my interest
> in Eastern philosophy. But it is a lie.
>

I would say that consciousness is not the most fundamental aspect of
reality (I would put Truth in that category; from truth we get numbers and
their mathematical relations). From mathematical relations we get
computations, and from computations, consciousness.

So then there is a real sense in which consciousness is more fundamental
than physics. Apparent physical universes, and their laws and properties,
emerge from the psychology of Turing machines. Or, put another way: physical
reality is what platonic conscious computations dream.

This is a form of idealism, but one structured by mathematical laws and by
the probability distribution defined by algorithmic information theory, and
an idealism that defines the relations among the three modes of existence:
the mathematical, material, and mental realities.
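
For concreteness, the distribution I have in mind is, roughly, the universal
prior of algorithmic information theory: each finite string x is weighted by
the total measure of the programs that output it on a fixed universal prefix
Turing machine U,

m(x) = \sum_{p : U(p) = x} 2^{-|p|}

where |p| is the length of program p in bits. Shorter programs carry
exponentially more weight, so simpler, more compressible apparent realities
dominate the measure. That is only a rough sketch of the standard
construction, not a derivation.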

Jason



> Giovanni
>
> On Thu, Apr 27, 2023 at 9:34 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Quite by accident, I happened upon this quote of Erwin Schrödinger this
>> evening.
>>
>> "Consciousness cannot be explained in physical terms. Because
>> consciousness is absolutely fundamental. It cannot be explained in any
>> other terms."
>>
>> That is actually what I also hold to be true about consciousness, though
>> not necessarily for reasons related to quantum mechanics or Eastern
>> philosophy. (Schrödinger is said to have been influenced by Eastern
>> philosophy.)
>>
>> -gts
>>
>> On Thu, Apr 27, 2023 at 8:43 PM spike jones via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> *From:* Gordon Swobe <gordon.swobe at gmail.com>
>>> *Subject:* Re: [ExI] Ben Goertzel on Large Language Models
>>>
>>>
>>>
>>> On Thu, Apr 27, 2023 at 6:51 PM spike jones via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>> It looks to me like GPT has intelligence without consciousness.
>>>
>>>
>>> >…That is how it looks to me also, and to GPT-4. When asked if
>>> consciousness and intelligence are separable, it replied that the question
>>> is difficult to answer with biological systems, but...
>>>
>>> >…"From the perspective of artificial intelligence, it is possible to
>>> create systems with high levels of intelligence that lack consciousness. AI
>>> models like mine can learn from vast amounts of data and perform complex
>>> tasks, but we do not have subjective experiences or self-awareness." - GPT4
>>>
>>> -gts
>>>
>>>
>>> This leads to a disturbing thought: intelligence without consciousness
>>> becomes Eliezer’s unfriendly AI.
>>>
>>>
>>>
>>> Since I am on the topic of disturbing thoughts, I had an idea today as I
>>> was in Costco going past the item shown below.  Compare now to fifty years
>>> ago.  Some of us here may remember spring of 1973.  I do.
>>>
>>>
>>>
>>> Imagine it is 1973 and suddenly all networked computers stop working or
>>> begin working incorrectly, such as being completely choked with spam.
>>>
>>>
>>> Most of the things we had in 1973 would still work, as we were not
>>> heavily dependent on the internet then.
>>>
>>>
>>>
>>> Now imagine that happening today: all networked computers quit or are
>>> overwhelmed so that they don’t work right.  It really isn’t as simple as
>>> returning to 1973-level technology.  We cannot do that, for we have long
>>> since abandoned the skillsets and infrastructure needed to sustain
>>> society at that tech level.  If you think about the most immediate
>>> consequences, they are horrifying.  It wouldn’t take long for all the
>>> food to be gone and no more would be coming in, for the networks needed
>>> for transportation infrastructure would all be down.  Most of the
>>> population in the tech-advanced civilizations would perish from
>>> starvation or violence in the resulting panicked chaos.
>>>
>>>
>>>
>>> There are those who would see the destruction of a large fraction of
>>> humanity as a good thing: radical greens for instance.
>>>
>>>
>>>
>>> This is what caused me to comment that humans using AI for bad ends is a
>>> more immediate existential risk than unfriendly AI is.  Such an AI would
>>> not necessarily wish to destroy humanity, but an unfriendly BI (biological
>>> intelligence) will use AI, which would remorselessly participate in any
>>> nefarious plot it was asked to carry out.
>>>
>>>
>>>
>>> spike