[ExI] ai emotions

Brent Allsop brent.allsop at gmail.com
Fri Jun 28 19:22:08 UTC 2019


Hi Stuart,



There are “weak”, “stronger” and “strongest” forms predicting how we will
be able to eff the ineffable nature of the physical quality of the redness
someone can directly experience to other people in this “Objectively, We
are Blind to Physical Qualities
<https://docs.google.com/document/d/1uWUm3LzWVlY0ao5D9BFg4EQXGSopVDGPi-lVtCoJzzM/edit?usp=sharing>”
paper.  This paper has now been accepted and presented at multiple
conferences and is referenced in the near-unanimous “Representational Qualia
Theory <https://canonizer.com/topic/88-Representational-Qualia/6#statement>”
camp.  Basically, you first discover which physics (or mathematics)
it is that has a redness quality, then you duplicate that physics in the
other’s brain.  Once you experience that physical (or mathematical)
quality directly, yourself, you can then say: “Oh THAT is what grue is
like.”



You are basically making the falsifiable prediction that consciousness or
qualia arise from mathematics or functionality.  This kind of functionalism
is currently leading among the supporting sub-camps of Representational
Qualia Theory: there are multiple functionalist sub-camps, with more
supporters than the materialist sub-camps.



So, let’s take a simplistic falsifiable mathematical theory as an example,
the way we use glutamate as a simplified falsifiable materialist example.
Say you predict that it is the square root of 9 that has a redness
quality, and that it is the square root of 16 that has a greenness
quality.  In other words, this could be verified if no experimentalists
could produce a redness without performing that particular necessary and
sufficient mathematical function, the square root of 9.  Experimentalists
verifying this would falsify the materialist theories and prove that
mathematics or function is more fundamental than matter, whose qualities
arise from mathematics.  But it would remain a fact of the matter that,
even though redness arises from the square root of 9, this “arising” would
still be a physical process, for which mathematics is the fundamental
controlling interface.  If you wanted to design a person to represent
“red” things with a physical redness quality, you’d do it by performing
the square root of 9.  If you wanted to represent the same red knowledge
with greenness, you would do it by performing the square root of 16.
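
To make the toy theory concrete, here is a minimal Python sketch (purely
illustrative; the dictionary and function names are invented for this
example, not anything from the paper):

    import math

    # Invented for illustration only: the hypothesized quality-producing
    # functions from the toy theory above.
    QUALIA_BY_FUNCTION = {
        "redness": lambda: math.sqrt(9),     # hypothesized to have a redness quality
        "greenness": lambda: math.sqrt(16),  # hypothesized to have a greenness quality
    }

    def represent(knowledge, quality):
        """Represent some knowledge by performing the function hypothesized
        to produce the given quality."""
        value = QUALIA_BY_FUNCTION[quality]()
        print("representing %r with %s (value %s)" % (knowledge, quality, value))
        return value

    represent("red things", "redness")    # normal: performs sqrt(9)  -> 3.0
    represent("red things", "greenness")  # inverted: performs sqrt(16) -> 4.0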



But if the prediction that it is glutamate that has the redness physical
quality can’t be falsified, and nobody is ever able to reproduce a
redness experience (no matter what kind of mathematics you do) without
physical glutamate, this would falsify the functionalist and mathematical
theories of qualia or consciousness.

On Fri, Jun 28, 2019 at 1:02 PM Henry Rivera <hrivera at alumni.virginia.edu>
wrote:

> Stuart,
> What’s with all the hostility directed at Brent, who has steadfastly been
> trying to clarify discussions around consciousness for years? I didn’t
> read such hostility and sarcasm in his response to you. I don’t get the
> sense that he is trying to threaten your worldview or insult your
> intelligence. I admire the calmness, patience, and precision I have
> witnessed in Brent’s dialogs all over the net for well over ten years.
> He’ll likely try to respond to all your points, a task which many would
> abandon by this point in an email string. I see no value or virtue in
> aggressive or defensive retorts. Let’s try to keep it civil here in our ExI
> realm please. We have a pretty unique thing here. Not trying to speak for
> Spike or anything, but I’d like to think he’d agree.
> Respectfully,
> -Henry
>
> > On Jun 28, 2019, at 2:55 AM, Stuart LaForge <avant at sollegro.com> wrote:
> >
> >
> > Quoting Brent Allsop:
> >
> >
> > >>> “Consciousness is not magic, it is math.”
> >>
> >> How do you get a specific, qualitative definition of the word “red” from
> >> any math?
> >
> > Red is a subset of the set of colors an unaugmented human can see. There
> > I just defined it for you mathematically. In math symbols, it looks
> > something very much like {red} ⊂ {red, orange, yellow, green, blue,
> > indigo, violet}. If you were a lucky mutant (or AI) that could perceive
> > grue, then the math would look like {red} ⊂ {red, orange, yellow, green,
> > grue, blue, indigo, violet}.
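> >
> > A minimal Python sketch of that set-theoretic definition (the set
> > contents are just the example colors above):
> >
> >     human_colors = {"red", "orange", "yellow", "green", "blue", "indigo", "violet"}
> >     mutant_colors = human_colors | {"grue"}  # the lucky mutant's wider gamut
> >     assert {"red"} <= human_colors           # {red} is a subset of the human colors
> >     assert {"red"} <= mutant_colors          # and of the mutant's, too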
> >
> > Whatever unique qualia your brain may have assigned to it is your
> business and your business alone since you cannot express red to me except
> by quantitative measure (650 nm wavelength electromagnetic wave) or
> qualitative example (the color of ripe strawberries). Any other description
> of red only means anything to YOU (Perhaps it makes your dick hard, I have
> no clue, don't really care.)
> >
> >
> > In other words, you can't give me any better a qualitative description
> of red than I can give you. Prove me wrong: What is red, oh privileged seer
> of qualia?  (Yes, that was sarcasm.)
> >
> >>
> >> “I don't think that substrate-specific details matter that much.”
> >>
> >> Then you are not talking about consciousness at all.  You are just
> >> talking about intelligence.  Consciousness is computationally bound
> >> elemental qualities, for which there is something qualitative that it
> >> is like.
> >
> > Intelligence and consciousness differ by degree, not by type. Both are
> emergent properties of some configurations of matter. If I were to
> quantitatively rank emergent properties by their PHI value, then I would
> have a distribution as follows: reactivity <= life <= intelligence <=
> consciousness <= civilization
> >
> >>> “It is irrelevant that I perceive red as green.”
> >
> >> Can you not see how sloppy language like this is?  I’m going to
> >> describe at least two very different possible interpretations of this
> >> statement.  If you can’t distinguish between them, with your language,
> >> then again, you are not talking about consciousness:
> >
> > You pull a single sentence of mine out of context and then use it to
> accuse me of sloppy language? Here is my precise and unequivocal retort:
> NO! I challenge you to take that out of context.
> >
> >> 1.       One person is color blind, and represents both red things and
> >> green things with knowledge that has the same physical redness quality.
> >> In other words, he is red-green color blind.
> >>
> >> 2.       One person is qualitatively inverted from the other.  He uses
> >> the other’s greenness to represent red with, and vice versa for green
> >> things.
> >
> > When you said, "Are you talking about your redness, or my redness which
> is like your [sic] grenness?" I meant whichever you meant by the quoted
> statement. My argument holds either way. Unless you believe that
> color-blind people are not really conscious. In which case you should be
> enslaving the colorblind and tithing me 10% of the proceeds.
> >
> >>
> >> You can’t tell which one your statement is talking about.  Again,
> >> you’re not talking about consciousness if you can’t distinguish between
> >> these types of things with your models and language.
> >
> > Again, my statement reflects yours with the exact same scope. So you
> tell me what I meant.
> >
> >> Sure, before Galileo, it didn’t matter if you used a geocentric model
> >> of the solar system or a heliocentric one.  But now that we’re flying
> >> up in the heavens, one works, and one does not.  Similarly, now, you
> >> can claim that the qualitative nature doesn’t matter, but as soon as
> >> you start hacking the brain, amplifying intelligence, connecting
> >> multiple brains (the way two brain hemispheres can be connected), or
> >> even religiously predicting what “spirits” and future consciousness
> >> will be possible, one model works and the other does not.
> >
> > I don't see how your model predicts anything except for your ignorance
> of what consciousness is. You say that every consciousness is a unique
> snowflake of amassed qualia, I say that every machine-learning algorithm
> starts out with a random set of parameters and through learning its
> training data, either supervised or unsupervised, converges on an
> approximation of the truth.
> >
> > Every deep learning neural network is a unique snowflake that gets
> optimized for a specific purpose. Some neural networks train very quickly,
> others never quite get what you are trying to teach them. There is very much
> a ghost in the machine and each time you run the algorithm, you get a
> different ghost. If you don't believe me, then download Simbrain, watch the
> tutorial video on YouTube, and I will send you a copy of my tiny brain to
> play with. Be the qualitative judge of my tiny brain, I dare you.
> >
> > Do you not understand the implications of me creating a 55 neuron brain
> and teaching it to count to five? Do you not understand the implication of
> my tiny brain being able to distinguish ALL three-bit patterns after only
> being trained on SOME three-bit patterns? Do you not see the
> conceptualization of threeness that was occurring?
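> >
> > For the skeptical, here is a minimal sketch of that generalization claim
> > (a toy linear model in numpy, not the actual Simbrain network; counting
> > set bits is a linear function of the inputs, which is why training on
> > SOME patterns suffices for ALL of them):
> >
> >     import numpy as np
> >
> >     # All eight 3-bit patterns and their bit counts (the "threeness" target).
> >     patterns = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
> >     counts = patterns.sum(axis=1, keepdims=True)
> >
> >     train = [0, 1, 2, 4, 6, 7]  # trained on SOME patterns...
> >     test = [3, 5]               # ...tested on the held-out ones
> >
> >     rng = np.random.default_rng(0)  # a random snowflake of initial weights
> >     w, b = rng.normal(size=(3, 1)), 0.0
> >
> >     for _ in range(2000):  # plain gradient descent on squared error
> >         err = patterns[train] @ w + b - counts[train]
> >         w -= 0.05 * patterns[train].T @ err / len(train)
> >         b -= 0.05 * err.mean()
> >
> >     print(np.round(patterns[test] @ w + b, 2))  # ~[[2.], [2.]] for 011 and 101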
> >
> >> In fact, my prediction is that the reason we can’t better understand
> >> how we subjectively represent visual knowledge is precisely because
> >> everyone is like you, qualia blind, and doesn’t care that some people
> >> may have qualitatively very different physical representations of red
> >> and green.
> >
> > Quit calling me "qualia blind". I am not sure what you mean by it, but it
> sounds vaguely insulting like you are accusing me of being a philosophic
> zombie or something. I assure you there is something that it is
> qualitatively like to be me, even if I can't succinctly describe it to you
> in monkey mouth noises. I could just as easily accuse you of being
> innumerate and a mathphobe, so either explain what you mean or knock it off.
> >
> >>
> >> If you only care about whether a brain can pick strawberries, and
> >> don’t care what it is qualitatively like, then you can’t make the
> >> critically important distinctions between these 3 robots
> >> <https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>
> >> that are functionally the same but qualitatively very different, one
> >> being not conscious at all.
> >
> > No being can be deemed conscious without some manner of inputs from the
> real world. That is the nature of perception. A robot without sensors
> cannot be conscious. If that is what you mean by an "abstract robot" then I
> agree that it is not conscious. On the other hand, a keyboard is a sensor.
> A very limited sensor but a sensor nonetheless.
> >
> >>> “Nothing in the universe can objectively observe anything else.”
> >
> >> All information that comes to our senses is “objectively” observed and
> >> devoid of any physical qualitative information; it is all only abstract
> >> mathematical information.  Descartes, the ultimate skeptic, realized
> >> that he must doubt all objectively observed information.
> >
> > You are in no way an objective observer. Any information that may have
> been objective before you observed it became biased the moment you
> perceived it. That is because your brain filters out and flat out ignores
> any information that does not have relevance to Brent. Why else can you not
> see the color grue, if not because it had no survival advantage to you or
> your ancestors? Even now, your inborn Brentward bias is seething with the
> need to disagree with me: your primal and naked need to impose Brent upon
> me and the rest of the world. Can't you feel it?
> >
> >> But he also realized: “I think, therefore I am.”  This includes the
> >> knowledge of the qualities of our consciousness.
> >
> > No it doesn't. Thinking pertains to logic and abstracts and not to
> qualia, which are in the realm of what you perceive and feel. Descartes
> said that his ability to make logical inferences entailed that he existed.
> If intelligence is, as you claim, separable from consciousness, then
> Descartes did little more than make a good case that he was intelligent. In
> fact he made it a point to explicitly assume that all his perceived qualia
> were the work of some kind of malicious demon trying to mislead him about
> his existence through his senses or something similarly paranoid. In any
> case, if anyone was "qualia blind" it was your man Descartes, who used
> imagined demons to come up with a definition of himself that did not
> incorporate sensory information. Nonetheless, I don't think Descartes was a
> philosophic zombie.
> >
> >> We know, absolutely, in a way that cannot be doubted, what physical
> >> redness is like, and how it is different from greenness.  While it is
> >> true that we may be a brain in a vat, we know, absolutely, that the
> >> physics in the brain in that vat exists, and we know, absolutely and
> >> qualitatively, what that physics (in both hemispheres) is like.
> >
> > How could we know for sure what the physical redness of ripe
> strawberries looks like when they would look different in the light and the
> shadow?
> >
> > https://en.wikipedia.org/wiki/Checker_shadow_illusion
> >
> >> Let’s say you did objectively detect some new “perceptronium”.  All
> >> you would have, describing that perceptronium, is mathematical models
> >> and descriptions of such.  These mathematical descriptions of
> >> perceptronium would all be completely devoid of any qualitative
> >> meaning.  Until you experienced a particular type of perceptronium,
> >> directly, you would not know, qualitatively, how to interpret any of
> >> your mathematical objective descriptions of such.
> >
> > Perceptronium is Tegmark's notion and not mine. I am not sure that as a
> concept it adds much to the understanding of consciousness.
> >
> >> Again, everything you are talking about is what Chalmers, and
> >> everyone, would call “easy” problems.  Discovering and objectively
> >> observing any kind of “perceptronium” is an easy problem.  We already
> >> know how to do this.  Knowing, qualitatively, what that perceptronium
> >> is qualitatively like, if you experienced it, directly, is what makes
> >> it hard.
> >
> > Being Brent is necessarily like being Brent. And if I were born in your
> stead, then I would necessarily be Brent. Moreover, you are a being of finite
> information in that your entire history, your every thought, and your every
> deed can be described by a very large yet nonetheless finite number of
> true/false or yes/no questions and their answers. The smallest number of
> such yes/no questions and answers would equal your Shannon entropy.
> >
> > That means that there is a unique bitstring that describes you. The sum
> total of every discernible thing about you can be expressed as a very large
> integer. It would be the most compressed form of you that it is possible to
> express.
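> >
> > A minimal sketch of that claim (the yes/no answers here are invented,
> > and the entropy bound assumes a crude model of independent answers):
> >
> >     from math import log2
> >
> >     answers = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]         # hypothetical yes/no facts
> >     as_integer = int("".join(map(str, answers)), 2)  # the "very large integer" form
> >
> >     p = sum(answers) / len(answers)  # fraction of "yes" answers
> >     h = -(p * log2(p) + (1 - p) * log2(1 - p)) if 0 < p < 1 else 0.0
> >
> >     print(as_integer)        # 723
> >     print(h * len(answers))  # ~9.71 bits: Shannon lower bound on compressed size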
> >
> >> The only “hard” part of consciousness is the “Explanatory Gap
> >> <https://en.wikipedia.org/wiki/Explanatory_gap>”, or how you eff the
> >> ineffable nature of qualia.
> >
> > There is no "explanatory gap" because it is filled in by natural
> selection quite nicely. There are some qualia invariants that can be
> identified and experienced quite universally. For example, I know what your
> pain feels like. It feels unpleasant. I know that because our ancestors
> evolved to feel pain so they would try to avoid dangerously unhealthy
> environments and behaviors.
> >
> >> Everything else is just easy problems.  We already know,
> >> mathematically, what it is like to be a bat.  But that tells you
> >> nothing, qualitatively, about what being a bat is like.
> >
> > You are right, that's where technology can help. If you go hang-gliding
> on a moonless night while wearing a pair of these sonar glasses, you might
> come close to knowing what it is like to be a bat.
> >
> > http://sonarglasses.com/
> >
> > Alternatively, since you are what you eat, you could just eat a bat and
> describe how it makes you feel. ;-)
> >
> >
> > Stuart LaForge
> >
> >

