[ExI] ai emotions

William Flynn Wallace foozler83 at gmail.com
Fri Jun 28 14:48:02 UTC 2019


Stuart/Brent wrote: "If intelligence is, as you claim, separable from
consciousness..."

Is dreaming - aka REM sleep -  a variety of consciousness to you?  I have
certainly used my intelligence while I was dreaming - mainly to figure out
what I was trying to say to myself!

In a way, stage 4 sleep is the deepest. That's where the night terrors take
place - which I have never experienced, but I assume there is something
like consciousness there for the terrors to be experienced in.

Other stages of sleep are not accompanied by any consciousness, although we
can drift between a bit of consciousness and sleep in stage 1.  Some people
say they can tell they are entering sleep when their thoughts go from
rational to a bit crazy.

bill w


On Fri, Jun 28, 2019 at 2:00 AM Stuart LaForge <avant at sollegro.com> wrote:

>
> Quoting Brent Allsop:
>
>
> >> "Consciousness is not magic, it is math."
> >
> > How do you get a specific, qualitative definition of the word "red" from
> > any math?
>
> Red is a subset of the set of colors an unaugmented human can see.
> There I just defined it for you mathematically. In math symbols, it
> looks something very much like {red} ⊂ {red, orange, yellow, green,
> blue, indigo, violet}. If you were a lucky mutant (or AI) that could
> perceive grue, then the math would look like {red} ⊂ {red, orange,
> yellow, green, grue, blue, indigo, violet}.
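>
> (A minimal, purely illustrative restatement of that subset claim in
> Python sets; the color names and the hypothetical "grue" are just
> placeholders:)
>
> visible = {"red", "orange", "yellow", "green", "blue", "indigo", "violet"}
> print({"red"} <= visible)              # True: {red} is a subset of the visible colors
> print({"red"} <= visible | {"grue"})   # still True for the lucky grue-seeing mutant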
>
> Whatever unique qualia your brain may have assigned to it is your
> business and your business alone since you cannot express red to me
> except by quantitative measure (650 nm wavelength electromagnetic
> wave) or qualitative example (the color of ripe strawberries). Any
> other description of red only means anything to YOU (Perhaps it makes
> your dick hard, I have no clue, don't really care.)
>
>
> In other words, you can't give me any better a qualitative description
> of red than I can give you. Prove me wrong: What is red, oh privileged
> seer of qualia?  (Yes, that was sarcasm.)
>
> >
> > "I don't think that substrate-specific details matter that much."
> >
> > Then you are not talking about consciousness, at all.  You are just
> > talking about intelligence.  Consciousness is computationally bound
> > elemental qualities, for which there is something, qualitative, for
> > which it is like.
>
> Intelligence and consciousness differ by degree, not by type. Both are
> emergent properties of some configurations of matter. If I were to
> quantitatively rank emergent properties by their PHI value, then I
> would have a distribution as follows: reactivity <= life <=
> intelligence <= consciousness <= civilization
>
> >> "It is irrelevant that I perceive red as green."
>
> > Can you not see how sloppy language like this is?  I'm going to describe
> > at least two very different possible interpretations of this statement.
> > If you can't distinguish between them, with your language, then again,
> > you are not talking about consciousness:
>
> You pull a single sentence of mine out of context and then use it to
> accuse me of sloppy language? Here is my precise and unequivocal
> retort: NO! I challenge you to take that out of context.
>
> > 1.       One person is color blind, and represents both red things and
> > green things with knowledge that has the same physical redness quality.
> > In other words, he is red-green color blind.
> >
> > 2.       One person is qualitatively inverted from the other.  He uses
> > the other's greenness to represent red, and vice versa for green things.
>
> When you said, "Are you talking about your redness, or my redness
> which is like your [sic] grenness?" I meant whichever you meant by the
> quoted statement. My argument holds either way. Unless you believe
> that color-blind people are not really conscious. In which case you
> should be enslaving the colorblind and tithing me 10% of the proceeds.
>
> >
> > You can't tell which one your statement is talking about.  Again,
> > you're not talking about consciousness, if you can't distinguish between
> > these types of things with your models and language.
>
> Again, my statement reflects yours with the exact same scope. So you
> tell me what I meant.
>
> > Sure, before Galileo, it didn't matter if you used a geocentric model of
> > the solar system or a heliocentric one.  But now that we're flying up in
> > the heavens, one works, and one does not.  Similarly, now, you can claim
> > that the qualitative nature doesn't matter, but as soon as you start
> > hacking the brain, amplifying intelligence, connecting multiple brains
> > (like two brain hemispheres can be connected) or even religiously
> > predicting what "spirits" and future consciousness will be possible, one
> > model works and the other does not.
>
> I don't see how your model predicts anything except for your ignorance
> of what consciousness is. You say that every consciousness is a unique
> snowflake of amassed qualia; I say that every machine-learning
> algorithm starts out with a random set of parameters and, through
> learning its training data, either supervised or unsupervised,
> converges on an approximation of the truth.
>
> Every deep learning neural network is a unique snowflake that gets
> optimized for a specific purpose. Some neural networks train very
> quickly, others never quite get what you are trying to teach them. There
> is very much a ghost in the machine, and each time you run the
> algorithm, you get a different ghost. If you don't believe me, then
> download Simbrain, watch the tutorial video on YouTube, and I will
> send you a copy of my tiny brain to play with. Be the qualitative judge
> of my tiny brain, I dare you.
>
> Do you not understand the implications of my creating a 55-neuron
> brain and teaching it to count to five? Do you not understand the
> implication of my tiny brain being able to distinguish ALL three-bit
> patterns after only being trained on SOME three-bit patterns? Do you
> not see the conceptualization of threeness that was occurring?
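>
> (A minimal sketch of the idea, not the actual Simbrain network I built:
> a tiny numpy net, randomly initialized, trained to count the set bits of
> SOME three-bit patterns and then asked about ALL of them. The layer
> sizes, learning rate, and held-out patterns are arbitrary illustrations;
> whether it gets the held-out patterns right is exactly the
> generalization question at issue.)
>
> import numpy as np
>
> rng = np.random.default_rng(0)
> patterns = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
> counts = patterns.sum(axis=1).astype(int)        # 0..3 set bits per pattern
> targets = np.eye(4)[counts]                      # one-hot "how many bits" labels
> train = [0, 1, 2, 4, 6, 7]                       # train on SOME patterns, hold out two
>
> W1 = rng.normal(0, 1, (3, 8)); b1 = np.zeros(8)  # random initial parameters...
> W2 = rng.normal(0, 1, (8, 4)); b2 = np.zeros(4)  # ...a different "ghost" each run
>
> def forward(x):
>     h = np.tanh(x @ W1 + b1)
>     z = h @ W2 + b2
>     p = np.exp(z - z.max(axis=1, keepdims=True))
>     return h, p / p.sum(axis=1, keepdims=True)
>
> for step in range(5000):                         # plain gradient descent
>     x, y = patterns[train], targets[train]
>     h, p = forward(x)
>     dz = (p - y) / len(train)
>     dW2 = h.T @ dz; db2 = dz.sum(axis=0)
>     dh = dz @ W2.T * (1 - h ** 2)
>     dW1 = x.T @ dh; db1 = dh.sum(axis=0)
>     W2 -= 0.5 * dW2; b2 -= 0.5 * db2
>     W1 -= 0.5 * dW1; b1 -= 0.5 * db1
>
> _, p = forward(patterns)
> print("predicted bit counts:", p.argmax(axis=1))  # includes the held-out patterns
> print("true bit counts:     ", counts)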
>
> > In fact, my prediction is that the reason we can't better understand how
> > we subjectively represent visual knowledge is precisely because everyone
> > is like you, qualia blind, and doesn't care that some people may have
> > qualitatively very different physical representations of red and green.
>
> Quit calling me "qualia blind". I am not sure what you mean by it, but
> it sounds vaguely insulting, like you are accusing me of being a
> philosophic zombie or something. I assure you there is something that
> it is qualitatively like to be me, even if I can't succinctly describe
> it to you in monkey mouth noises. I could just as easily accuse you of
> being innumerate and a mathphobe, so either explain what you mean or
> knock it off.
>
> >
> > If you only care about whether a brain can pick strawberries, and don't
> > care what it is qualitatively like, then you can't make the critically
> > important distinctions between these 3 robots
> > <https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>
> > that are functionally the same but qualitatively very different, one
> > being not conscious at all.
>
> No being can be deemed conscious without some manner of inputs from
> the real world. That is the nature of perception. A robot without
> sensors cannot be conscious. If that is what you mean by an "abstract
> robot" then I agree that it is not conscious. On the other hand, a
> keyboard is a sensor. A very limited sensor but a sensor nonetheless.
>
> >> "Nothing in the universe can objectively observe anything else."
>
> > All information that comes to our senses is "objectively" observed and
> > devoid of any physical qualitative information; it is all only abstract
> > mathematical information.  Descartes, the ultimate skeptic, realized
> > that he must doubt all objectively observed information.
>
> You are in no way an objective observer. Any information that may have
> been objective before you observed it became biased the moment you
> perceived it. That is because your brain filters out and flat-out
> ignores any information that does not have relevance to Brent. Why
> else could you not see the color grue unless it had no survival
> advantage to you or your ancestors? Even now, your inborn Brentward
> bias is seething with the need to disagree with me: your primal and
> naked need to impose Brent upon me and the rest of the world. Can't
> you feel it?
>
> > But he also realized: "I think, therefore I am."  This includes the
> > knowledge of the qualities of our consciousness.
>
> No it doesn't. Thinking pertains to logic and abstracts and not to
> qualia, which are in the realm of what you perceive and feel.
> Descartes said that his ability to make logical inferences entailed
> that he existed. If intelligence is, as you claim, separable from
> consciousness, then Descartes did little more than make a good case
> that he was intelligent. In fact he made it a point to explicitly assume
> that all his perceived qualia were the work of some kind of malicious
> demon trying to mislead him about his existence through his senses or
> something similarly paranoid. In any case, if anyone was "qualia
> blind" it was your man Descartes, who used imagined demons to come up
> with a definition of himself that did not incorporate sensory
> information. Nonetheless, I don't think Descartes was a philosophic
> zombie.
>
> > We know, absolutely, in a way that cannot be doubted, what physical
> > redness is like, and how it is different than greenness.  While it is
> > true that we may be a brain in a vat, we know, absolutely, that the
> > physics, in the brain, in that vat, exists, and we know, absolutely and
> > qualitatively, what that physics (in both hemispheres) is like.
>
> How could we know for sure what the physical redness of ripe
> strawberries looks like when they would look different in the light
> and the shadow?
>
> https://en.wikipedia.org/wiki/Checker_shadow_illusion
>
> > Let's say you did objectively detect some new "perceptronium".  All you
> > would have, describing that perceptronium, is mathematical models and
> > descriptions of such.  These mathematical descriptions of perceptronium
> > would all be completely devoid of any qualitative meaning.  Until you
> > experienced a particular type of perceptronium, directly, you would not
> > know, qualitatively, how to interpret any of your mathematical objective
> > descriptions of such.
>
> Perceptronium is Tegmark's notion and not mine. I am not sure that as
> a concept it adds much to the understanding of consciousness.
>
> > Again, everything you are talking about is what Chalmers, and everyone
> > would call "easy" problems.  Discovering and objectively observing any
> > kind of "perceptronium" is an easy problem.  We already know how to do
> > this.  Knowing, qualitatively, what that perceptronium is qualitatively
> > like, if you experienced it, directly, is what makes it hard.
>
> Being Brent is necessarily like being Brent. And if I were born in
> your stead, then I would necessarily be Brent. Moreover, you are a being
> of finite information, in that your entire history, your every thought,
> and your every deed can be described by a very large yet nonetheless
> finite number of true/false or yes/no questions and their answers. The
> smallest number of such yes/no questions and answers would equal your
> Shannon entropy.
>
> That means that there is a unique bitstring that describes you. The
> sum total of every discernible thing about you can be expressed as a
> very large integer. It would be the most compressed form of you that
> it is possible to express.
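>
> (A minimal illustration of that claim, with made-up probabilities: the
> Shannon entropy of a distribution over possible alternatives
> lower-bounds the average number of yes/no questions needed to single one
> out, and the answers to those questions form the bitstring described
> above.)
>
> import math
>
> def shannon_entropy(probs):
>     """Entropy in bits: H = -sum(p * log2(p))."""
>     return -sum(p * math.log2(p) for p in probs if p > 0)
>
> # Four equally likely alternatives take log2(4) = 2 yes/no questions.
> print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits
> # A skewed distribution takes fewer questions on average.
> print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits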
>
> > The only "hard" part of consciousness is the "Explanatory Gap"
> > <https://en.wikipedia.org/wiki/Explanatory_gap>, or how do you eff the
> > ineffable nature of qualia.
>
> There is no "explanatory gap" because it is filled in by natural
> selection quite nicely. There are some qualia invariants that can be
> identified and experienced quite universally. For example, I know what
> your pain feels like. It feels unpleasant. I know that because our
> ancestors evolved to feel pain so they would try to avoid dangerously
> unhealthy environments and behaviors.
>
> > Everything else is just easy problems.  We already know, mathematically,
> > what it is like to be a bat.  But that tells you nothing, qualitatively,
> > about what being a bat is like.
>
> You are right, that's where technology can help. If you go
> hang-gliding on a moonless night while wearing a pair of these sonar
> glasses, you might come close to knowing what it is like to be a bat.
>
> http://sonarglasses.com/
>
> Alternatively, since you are what you eat, you could just eat a bat
> and describe how it makes you feel. ;-)
>
>
> Stuart LaForge
>
>

