[ExI] What is Consciousness?

Jason Resch jasonresch at gmail.com
Wed Mar 22 19:20:49 UTC 2023


On Tue, Mar 21, 2023 at 8:41 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Thanks, Jason, for this great thread, and the various expert views you
> provided.
>

Thank you. I am glad you liked it. :-)


> It'd be so great to see if we could get all those experts to support a
> concise statement about what they all agree consciousness is, so we could
> track that, see how much consensus there is for the best ideas they all
> agree on, track how much consensus the various competing ideas achieve
> over time, and so on.
> THAT is exactly what we are doing in the Representational Qualia Theory
> petition / emerging consensus camp statement, which a growing number of
> experts are now supporting and helping to improve, recruiting new
> supporters, and so on.
>

Aren't consciousness researchers already doing this, through papers,
conferences, journals, books, etc.?


>
> Basically, all 45 supporters of Representational Qualia Theory
> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>
> are putting forth the idea that what all those experts are saying about
> consciousness misses the point, since everything they are talking about
> applies equally to abstract systems (which do computational binding of
> information via discrete logic gates in a CPU) and to a phenomenal system
> that is like something, because it runs directly on computationally bound
> phenomenal qualities.
>
> If you first understand how conscious awareness of color works, how
> 'redness' is not a quality of the strawberry but a property of our
> knowledge of the strawberry, then you can take that basic qualitative
> understanding and better understand the rest of consciousness, even though
> all the rest of consciousness and thinking (which all the experts you
> referenced are talking about) is quite different from just the perception
> of color.  If you can understand the basic idea of how our knowledge of red
> things is represented by a redness quality, and can clearly see how this
> differs from the way an abstract system represents knowledge of red things
> with just the word 'red' (which requires a dictionary), then you can take
> the general idea of conscious knowledge being like something,
>
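
To make that contrast concrete, here is a toy sketch (my own
illustration, not your formalism) of what "just uses the word 'red'
(requires a dictionary)" amounts to for an abstract system: the symbol
carries no quality of its own, only whatever other symbols and numbers
the dictionary maps it to.

    # Toy illustration: an abstract system's "knowledge" of red is a
    # symbol plus dictionary entries relating it to other symbols and
    # numbers.  Nothing in here has, or needs, a redness quality.

    red_entry = {
        "word": "red",
        "wavelength_nm": (620, 750),   # physical correlate, just numbers
        "rgb": (255, 0, 0),
        "associated_words": ["strawberry", "stop sign", "blood"],
    }

    def looks_red(rgb):
        """Classify a pixel as 'red' purely by comparing numbers."""
        r, g, b = rgb
        return r > 150 and g < 100 and b < 100

    print(looks_red((230, 40, 30)))  # True: outputs the word, not the quality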

Have you read "Color for Philosophers: Unweaving the Rainbow" by Clyde
Hardin?



> and this is what is most important about what consciousness is, and how
> it is very different from the kind of abstract computation computers do.
>

Have you written a computer program before? How would you characterize the
limits of computation or the limits of machines?

If this topic is outside your domain of expertise, I would recommend the
book "The Pattern on the Stone"; it is written to explain computers and
computation to non-computer scientists. Here are some passages of
particular interest to the current topic:

    The theoretical limitations of computers provide no useful dividing
line between human beings and machines. As far as we know, the brain is a
kind of computer, and thought is just a complex computation. Perhaps this
conclusion sounds harsh to you, but in my view it takes away nothing from
the wonder of human thought. The statement that thought is a complex
computation is like the statement sometimes made by biologists that life is
a complex chemical reaction: both statements are true, and yet they still
may be seen as incomplete. They identify the correct components but they
ignore the mystery. To me, life and thought are both made all the more
wonderful by the realization that they emerge from simple, understandable
parts. I do not feel diminished by my kinship to Turing's machine. [...]
    Most people are interested not so much in the practical moral questions
of a hypothetical future as in the philosophical issues that the mere
possibility of an artificial intelligence raises about ourselves. Most of
us do not appreciate being likened to machines. This is understandable: we
ought to be insulted to be likened to stupid machines, such as toasters and
automobiles, or even today's computers. Saying that the mind is a relative
of a current-generation computer is as demeaning as saying that a human
being is related to a snail. Yet both statements are true, and both can be
helpful. Just as we can learn something about ourselves by studying the
neural structure of the snail, we can learn something about ourselves by
studying the simple caricature of thought within today's computers. We may
be animals, but in a sense our brain is a kind of machine.

    Many of my religious friends are shocked that I see the human brain as
a machine and the mind as a computation. On the other hand, my scientific
friends accuse me of being a mystic because I believe that we may never
achieve a complete understanding of the phenomenon of thought. Yet I remain
convinced that neither religion nor science has everything figured out. I
suspect consciousness is a consequence of the action of normal physical
laws, and a manifestation of a complex computation, but to me this makes
consciousness no less mysterious and wonderful--if anything, it makes it
more so. Between the signals of our neurons and the sensations of our
thoughts lies a gap so great that it may never be bridged by human
understanding. So when I say that the brain is a machine, it is not meant
as an insult to the mind but as an acknowledgement of the potential of a
machine. I do not believe that a human mind is less than we imagine it to
be, but rather that a machine can be much, much more.
--  Danny Hillis, "The Pattern on the Stone" (1998)
<https://archive.org/details/patternonstonesc00wdan>


Jason


> If anyone disagrees with this, or thinks there is a better way to think
> about and/or define what is or isn't conscious, they should start a
> competing camp saying so, so other experts can chime in.  May the best
> theories achieve the most consensus.
>
> On Tue, Mar 21, 2023 at 9:22 AM Gadersd via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> ChatGPT and I even came up with a scheme for how to do that, by making
>> different instances analyze the output and correct or improve it. It would
>> be relatively easy to create such self-recurrence. I even did some simple
>> experiments to achieve that. For example, you can ask ChatGPT to create a
>> room of philosophers and have them debate each other.
>>
>>
>> Anthropic actually had its own language model generate some of its own
>> training data: Claude critiqued its own responses, thereby improving
>> their truthfulness and alignment. The technique of using AI to train AI is
>> already underway.
>>
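
The generate-then-critique-then-revise pattern Gadersd describes can be
sketched in a few lines. This is only an illustration of the general
loop, not Anthropic's actual pipeline, and ask_model() below is a
hypothetical stand-in for whatever chat-model API one uses:

    # Sketch of a generate -> critique -> revise loop (illustrative only).
    # ask_model() is a hypothetical placeholder for a real chat-model call.

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug in a real chat-model call here")

    def critique_and_revise(question: str, rounds: int = 2) -> str:
        answer = ask_model(question)
        for _ in range(rounds):
            critique = ask_model(
                f"Question: {question}\nAnswer: {answer}\n"
                "List any errors, unsupported claims, or unclear reasoning."
            )
            answer = ask_model(
                f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
                "Rewrite the answer to fix the problems identified."
            )
        return answer

Looping the critique a few times is the simplest form of the "AI
critiques AI" recursion being discussed in this thread.
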
>> On Mar 21, 2023, at 2:05 AM, Giovanni Santostasi via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>> Spike,
>> I actually had this discussion with ChatGPT about having not even
>> different AIs but different instances of ChatGPT itself interacting with
>> and regulating each other.
>> ChatGPT and I even came up with a scheme for how to do that, by making
>> different instances analyze the output and correct or improve it. It would
>> be relatively easy to create such self-recurrence. I even did some simple
>> experiments to achieve that. For example, you can ask ChatGPT to create a
>> room of philosophers and have them debate each other.
>> Notice that the version of LaMDA that Lemoine (the Google engineer who
>> claimed LaMDA is conscious) tested and discussed was a meta version
>> charged with coordinating all the different personalities of LaMDA. That
>> is exactly what is needed for AGI, the Strange Loop; it is ripe for
>> emergent phenomena like consciousness.
>> Giovanni
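
Giovanni's "room of philosophers" experiment can likewise be sketched as
a simple loop over personas sharing one transcript. Again, this is just
a sketch; ask_model() is a hypothetical stand-in for a real chat-model
call, and nothing here depends on any particular vendor:

    # Sketch of the "room of philosophers" idea: several instances, each
    # given a different persona, take turns adding to a shared transcript.
    # ask_model() is a hypothetical placeholder for a real chat-model call.

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug in a real chat-model call here")

    def philosophers_debate(topic, personas, turns=3):
        transcript = f"Debate topic: {topic}"
        for _ in range(turns):
            for persona in personas:
                reply = ask_model(
                    f"You are {persona}.\n{transcript}\n"
                    f"Give {persona}'s next short contribution to the debate."
                )
                transcript += f"\n{persona}: {reply}"
        return transcript

    # e.g. philosophers_debate("What is consciousness?",
    #                          ["a functionalist", "a dualist", "a panpsychist"])
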
>>
>> On Sun, Mar 19, 2023 at 12:01 PM spike jones via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>>
>>>
>>> From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On
>>> Behalf Of Jason Resch via extropy-chat
>>> …
>>>
>>>
>>>
>>> >…We see recurring themes of information, recursion, computation, and
>>> machines and logic. I think these are likely key to any formal definition
>>> of consciousness. …Jason
>>>
>>> Jason, there is a reason I stopped worrying in the past coupla weeks
>>> that ChatGPT was going to cause the singularity.  I am a big Hofstadter
>>> fan, read Gödel, Escher, Bach twice, cover to cover, and invested a lot
>>> of time into that marvelous work.  He convinced me that machine
>>> consciousness (or any other sentience or self-awareness) requires a type
>>> of recursion.  Hofstadter goes on at length about recursion and
>>> self-reference, and the importance of Gödel’s work to understanding
>>> ourselves.
>>>
>>>
>>>
>>> I tried to convince myself that two or more ChatGPTs could train each
>>> other on their own time, which is a form of recursion and self-reference,
>>> and that process could perhaps spring into a human-level AGI with a will,
>>> with self-awareness, with all the stuff we think of as us.
>>>
>>>
>>>
>>> Now, after studying GPT^2 discussions and GPT^3 discussions, I find they
>>> all seem to devolve to nothing.  I think the technology for that process,
>>> two or more AIs training each other using background compute cycles, is
>>> still coming, but I now don’t think ChatGPT is that technology or is
>>> capable of it.
>>>
>>>
>>>
>>> If you know of examples of GPT-GPT discussions or GPT-any chatbot that
>>> became interesting, do share please.
>>>
>>>
>>>
>>> That belief was behind my comment last week that ChatGPT is not really
>>> thinking, but rather is just running language models.
>>>
>>>
>>>
>>> I currently don’t think ChatGPT is the technology capable of causing the
>>> singularity.  I am losing no sleep, not one minute of sleep over ChatGPT.
>>>
>>>
>>>
>>> Oops, partially retract that last comment but in a good way: I am losing
>>> some sleep over ChatGPT, by staying up late to goof with it.  It is the
>>> coolest software tool to come along in a long time.
>>>
>>>
>>>
>>> spike
>>>
>>
>>
>>
>