[ExI] Ben Goertzel on Large Language Models

spike at rainier66.com
Fri Apr 28 02:41:24 UTC 2023



From: Gordon Swobe <gordon.swobe at gmail.com> 
Subject: Re: [ExI] Ben Goertzel on Large Language Models


On Thu, Apr 27, 2023 at 6:51 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

It looks to me like GPT has intelligence without consciousness.

>…That is how it looks to me also, and to GPT-4. When asked if consciousness and intelligence are separable, it replied that the question is difficult to answer with biological systems, but...

>…"From the perspective of artificial intelligence, it is possible to create systems with high levels of intelligence that lack consciousness. AI models like mine can learn from vast amounts of data and perform complex tasks, but we do not have subjective experiences or self-awareness." - GPT-4





This leads to a disturbing thought: intelligence without consciousness becomes Eliezer’s unfriendly AI.


Since I am on the topic of disturbing thoughts, I had an idea today as I was in Costco going past the item shown below.  Compare now to fifty years ago.  Some of us here may remember the spring of 1973.  I do.


Imagine it is 1973 and suddenly all networked computers stop working or begin working incorrectly, such as being completely choked with spam.



Most of the things we had in 1973 would still work, for we were not heavily dependent on networked computers then.


Now imagine that happening today: all networked computers quit or are overwhelmed to the point that they no longer work right.  It really isn't as simple as returning to 1973-level technology.  We cannot do that, for we have long since abandoned the skillsets and infrastructure needed to sustain society at that tech level.  The most immediate consequences are horrifying.  It wouldn't take long for all the food to be gone, and no more would be coming in, for the networks needed for transportation infrastructure would all be down.  Most of the population in the technologically advanced civilizations would perish from starvation or violence in the resulting panicked chaos.


There are those who would see the destruction of a large fraction of humanity as a good thing: radical greens, for instance.


This is what caused me to comment that humans using AI for bad ends is a more immediate existential risk than unfriendly AI.  An unfriendly AI would not necessarily wish to destroy humanity, but an unfriendly BI (biological intelligence) would use AI, which would remorselessly participate in any nefarious plot it was asked to carry out.



[Image attachment: image001.jpg (scrubbed from archive)]
