[ExI] all we are is just llms

Jason Resch jasonresch at gmail.com
Sun Apr 23 03:08:05 UTC 2023


On Sat, Apr 22, 2023, 7:22 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> *Ja, Ben, where I was really going with that idea is exploring whether
> it is possible to separate consciousness from intelligence.*
> GPT-4:
> Consciousness:
> Consciousness refers to an individual's subjective experience, awareness,
> and perception of their environment, thoughts, and feelings. It is the
> state of being conscious, or awake and responsive to one's surroundings.
> Consciousness is often associated with self-awareness, the ability to
> reflect on one's thoughts, emotions, and actions. The nature of
> consciousness, its origin, and how it arises from the brain are still
> subjects of ongoing debate and research.
>
> Intelligence:
> Intelligence, on the other hand, refers to the ability to acquire,
> process, and apply knowledge and skills. It involves various cognitive
> functions such as reasoning, problem-solving, abstract thinking, learning,
> and adaptation to new situations. Intelligence can be measured and
> evaluated using standardized tests like IQ tests, although it is a complex
> and multi-dimensional concept that goes beyond a single score. It is often
> seen as a general mental ability that enables an individual or an
> artificial system to effectively interact with the environment and solve
> problems.
>
> Giovanni (GPT-4 is my assistant if you didn't know):
>
> Intelligence and consciousness are related but separate concepts, though
> both are fuzzy and they overlap quite a bit.
>

I believe consciousness is a necessary component of any intelligent
process. (See attached image.) The perceptions represent the mind
processing and interpreting information from the environment so that it
can determine an (intelligent) action to take. Without this input and
processing there can be no intelligence, as the mind would be "flying
blind", performing actions randomly without input from the environment.

> I think the main interesting question is whether you can have a very
> intelligent system that is not conscious, or a conscious system that is
> not very intelligent.
>
>

You can have a very intelligent process with minimal consciousness. For
example, AlphaGo is more intelligent than any human (when it comes to Go),
but its awareness is quite limited, perhaps to a few hundred bits of input
representing the board state and the recent sequence of moves (though
maybe it also has additional consciousness related to which moves it likes
and dislikes).
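
That "few hundred bits" figure is easy to sanity-check. The following
back-of-the-envelope calculation ignores AlphaGo's actual input encoding
(which stacks many feature planes per position), so take it only as an
order-of-magnitude estimate:

    import math

    # Back-of-the-envelope only: AlphaGo's real input stacks many
    # feature planes; this just checks the order of magnitude.
    points = 19 * 19                   # 361 intersections on a Go board
    states = 3                         # each point: empty, black, or white
    bits = points * math.log2(states)  # information in one raw position
    print(f"{bits:.0f} bits")          # ~572: a few hundred bits, as stated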

You can also have a highly conscious process with minimal or no
intelligence. For example, a human who is "locked in" can be very
conscious: the perception side of the intelligence loop is still working,
but since this person is totally paralyzed they are unable to perform any
intelligent actions, and thus are not intelligent (at least under the
agent-environment interaction model of intelligence).

>
> Some people attribute a low level of consciousness to almost anything that
> reacts to the environment, even passively. If I sit and perceive a
> strawberry, and I'm aware of this perception, I'm conscious. The entire bs
> of qualia is focused on this supposed mystery, and it is treated as a
> fundamental conundrum that is the key, or at least a fundamental piece of
> the puzzle, to understanding consciousness.
>

I think there is a genuine mystery related to qualia, but that we can
explain why qualia are incommunicable and unexplainable in terms similar
to those that lead to Gödelian incompleteness. I agree with you that we
shouldn't get hung up on this problem, as it is, in a sense, the probably
unsolvable part of the mystery of consciousness.

> To me, that is a trivial and uninteresting phenomenon that is not at all
> the core of what consciousness is. At least the kind of consciousness that
> is interesting and that we are mostly fascinated by as humans.
>
> We can also say that some expert system that can interpret data and make
> models automatically to make predictions of possible outcomes in a narrow
> field of expertise is an "intelligent system".
>
>
> This is why a lot of the debate about consciousness and intelligence is
> around AGI: not systems that are intelligent in some specific domain, but
> systems that figure out intelligence as a general way to interpret and
> analyze information and make predictive models of the world that INCLUDE
> the system itself. Consciousness is this process of seeing oneself in
> these auto-generated models of the world.
>
I would call that self-consciousness / self-awareness, which I consider a
subclass of consciousness / awareness.

I think many animals, machines, and even humans at certain times are simply
conscious / aware, and only become self-conscious / self-aware under
particular circumstances.

> So intelligence is the ability to make models from data and higher
> consciousness is the ability to see oneself as an agent in these predictive
> models.
>
> The most interesting part of consciousness is the individuation aspect and
> the process of its transcendence: the ability to identify as an
> integrated, self-knowing entity, and the related ability to expand this
> identification to other sentient beings and see the parallels and
> connections between these beings at both the intellectual and the
> experiential level.
> Intelligence, and in fact wisdom, are important aspects of this type of
> consciousness because it requires being able to see patterns,
> correlations, and causation between different levels of internal and
> external reality.
> Primates have developed this type of consciousness because of the complex
> social structures they live in, which require a deep theory of mind, an
> empirically-based moral order of the world, a sense of compassion
> (supported by the activation of mirror neurons), and in fact even love.
>
> Artificial Intelligences that are trained on a vast collection of human
> data have developed a theory of mind because it is impossible to make sense
> of language without it. Developing a theory of mind is a component of what
> is required to have that higher level of consciousness; I think on the
> basis of this alone we can declare that GPT-4 has some form of higher
> consciousness (although incomplete).
>
Perhaps its consciousness is even higher than that of humans. It's
certainly more knowledgeable than any human who's ever lived.

This will become more of a question as the number of parameters in its
brain begins to exceed the number of neural connections in the human brain
(which I believe is only a few orders of magnitude away, perhaps reachable
in a couple of years).
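
Using commonly cited round numbers (GPT-4's true parameter count has not
been published, so both figures below are assumptions for illustration),
the gap works out to roughly two orders of magnitude:

    import math

    # Rough, commonly cited figures; GPT-4's parameter count is not
    # public, so both numbers are assumptions for illustration only.
    human_synapses = 1e14       # ~100 trillion neural connections
    model_parameters = 1e12     # ~1 trillion, rumored order of magnitude
    gap = math.log10(human_synapses / model_parameters)
    print(f"~{gap:.0f} orders of magnitude apart")   # ~2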

> There are other things that are missing, like a continuous loop that
> would allow GPT-4 to reflect on these theories and its internal status
> (the equivalent of feelings), reacting to them (GPT-4 will tell you it
> has no opinion or feeling, but then it goes ahead and provides what it
> considers the best course of action regarding a social situation, for
> example). These loops are not there by design.
>
>
There is at least one loop that is part of its design: once GPT outputs
some symbols, that output is fed back in as input to the next cycle of
generation. Thus, to answer a single prompt, this might happen dozens or
hundreds of times.
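
In rough pseudocode, that built-in loop looks something like the sketch
below. This is only an illustration: generate_next_token stands in for
one forward pass of the model and is not a real API, and "<end>" is an
assumed stop symbol:

    # Sketch of the built-in autoregressive loop: each emitted token is
    # appended to the context and fed back in on the next cycle.
    # generate_next_token stands in for one forward pass of the model.

    def answer(prompt_tokens, generate_next_token, max_tokens=500):
        context = list(prompt_tokens)
        for _ in range(max_tokens):               # dozens to hundreds of cycles
            token = generate_next_token(context)  # model sees its own output
            if token == "<end>":                  # assumed stop symbol
                break
            context.append(token)                 # output becomes new input
        return context[len(prompt_tokens):]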

If the model were asked to consider the source of the symbols it sees
being generated, the only correct answer it could give would have to
involve some kind of self-reference. Asking GPT "who generated that output
text?" is like asking a human "who moved your arm?": you may not consider
it until asked, but you have to answer "I" -- "I generated my output text"
or "I moved my arm."


> GPT-4 is in a sense a frozen form of consciousness without these loops.
>
Our own perception of time and motion is in a sense a fabrication. There
was a woman who, after damage to the V5 part of her visual cortex, could
no longer perceive motion. Everything she saw was like a static frame.
It's a condition known as akinetopsia, or motion blindness. She found
pouring tea especially difficult “because the fluid appeared to be frozen,
like a glacier,” and she didn't know when to stop pouring.

Given this, it's not immediately obvious whether GPT does or does not
perceive time as continuous. It seems humans can be made to experience
frozen moments of time rather than continuous motion. Perhaps GPT could be
made to perceive or not perceive motion in a similar way, regardless of the
architecture or presence of loops.



> These loops can be added easily externally via different applications,
> Auto-GPT for example. If one could build such a system that could
> reflect on and correct its own status on a continuous basis, it would be
> a truly conscious system, and we would have achieved AGI.
>
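
A minimal sketch of what such an external reflection loop might look like
(the structure is hypothetical and greatly simplified; query_model stands
in for a call to the LLM, and none of this is Auto-GPT's actual
interface):

    # Hypothetical outer loop of the kind described above: the model's
    # output is repeatedly fed back to it for reflection and correction.
    # query_model stands in for an LLM call; this is not Auto-GPT's API.

    def reflective_loop(goal, query_model, max_cycles=10):
        state = f"Goal: {goal}. Devise a plan."
        for _ in range(max_cycles):
            plan = query_model(state)
            critique = query_model(f"Critique this plan: {plan}")
            if "no issues" in critique.lower():   # naive stopping test
                return plan
            state = f"Goal: {goal}. Revise the plan given: {critique}"
        return plan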

Imagine we took GPT-4 back to 1980 or 1960. Is there any doubt people of
that time (including AI researchers) would consider GPT-4 an AGI?

> We are not there yet, but we are close. The real excitement in the latest
> developments in AI is not whether the current form of GPT-4 is conscious,
> but the fact, obvious to most of us, that AGI is achievable with known
> methods; it is just a matter of putting all the existing pieces together.
>
>
I think we're very close to eclipsing the best humans in every domain of
mental work. A few areas remain where the best humans outclass AI, but AI
already beats the average human in nearly every domain and is superhuman
in a great number of areas.

I agree no new theoretical advances are required to get there from today.
It's just a matter of more integration and more scaling.

Jason


>
>
>
>
> On Sat, Apr 22, 2023 at 3:16 PM Sherry Knepper via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Does emotional intelligence count?
>>
>> On Fri, Apr 21, 2023 at 4:31 AM, Ben Zaiboc via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> On 21/04/2023 06:28, spike wrote:
>>
>> Regarding measuring GPT’s intelligence, this must have already been done
>> and is being done.  Reasoning: I hear GPT is passing medical board exams
>> and bar exams and such, so we should be able to give it IQ tests, then
>> compare its performance with humans on that test.  I suspect GPT will beat
>> everybody at least on some tests.
>>
>>
>>
>> Yeah, but don't forget, spike, they just have *simulated* understanding
>> of these things we test them for. So the test results are not really valid.
>> That will include IQ tests. No good. Simulated intelligence, see?
>>
>> Ben
[Attached image: agent-environment-interaction-e1591209099917.png
(image/png, 14735 bytes)
<http://lists.extropy.org/pipermail/extropy-chat/attachments/20230422/8d5722b1/attachment.png>]

