[ExI] Peer review reviewed AND Detecting ChatGPT texts

Adrian Tymes atymes at gmail.com
Tue Jan 10 04:38:26 UTC 2023


On Mon, Jan 9, 2023 at 7:43 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Mon, Jan 9, 2023 at 6:00 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I believe the disconnect here is what sort of AI is meant.  I refer to AI
>> in general.  I agree that a chat bot with no visual system can not know red
>> like we can.  However, I believe that a similar AI that included a visual
>> system with color sensors, like the robot depicted in your red apple/green
>> apple/machine apple image, could have its own version of redness.
>>
>
> So you're asserting that self driving cars represent the colors of objects
> they "see" with something more than strings of abstract 1s and 0s, which
> are specifically designed to be abstracted away from whatever physical
> properties happen to be representing them (with transducing dictionary
> mechanisms), and for which you can't know what the various strings mean,
> without a dictionary?
>

No more than this is the case for our brains, which it isn't.

Also, self-driving cars aren't the best example, as most of them don't see
in more than one color.

That said, my Tesla represents nearby shapes to me, via its dashboard, as
abstractions from the 1s and 0s of its sensors, and those abstractions had
to have been specifically designed.  Maybe it can't see in color, but it
demonstrates perceiving a "carness" vs. a "busness" vs. a "pedestrianess"
sense of nearby objects.  There are various concepts in programming, such
as object oriented programming, that are all about attaching such qualities
to things and acting based on which qualities a thing has.  The qualities
are necessarily digital representations, rather than literally the actual
physical properties.  Though even in this case it can be claimed that some
sort of dictionary-like thing is involved.
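
The object-oriented idea above can be sketched in a few lines of Python.  To
be clear, the class names, the `react_to` function, and the behaviors are
purely illustrative assumptions of mine, not anything from an actual
driving-assist system:

```python
# Minimal sketch: attaching a "quality" to detected objects and acting
# based on which quality a thing has.  All names and rules are illustrative.

class DetectedObject:
    """Base class for anything the sensors perceive."""
    quality = "thingness"

    def __init__(self, distance_m):
        self.distance_m = distance_m


class Car(DetectedObject):
    quality = "carness"


class Bus(DetectedObject):
    quality = "busness"


class Pedestrian(DetectedObject):
    quality = "pedestrianess"


def react_to(obj):
    """Choose an action from the object's quality, not its raw sensor bits."""
    if isinstance(obj, Pedestrian):
        return "brake hard"      # pedestrians always get the strongest response
    if obj.distance_m < 10:
        return "slow down"       # anything else only matters when it is close
    return "continue"


scene = [Car(25.0), Bus(8.0), Pedestrian(40.0)]
for thing in scene:
    print(thing.quality, "->", react_to(thing))
```

The point the sketch makes is the one in the paragraph: the qualities here
are digital labels attached by design, and the class hierarchy itself acts
as the dictionary-like thing that maps raw input to those labels.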

And even in our brain, all thoughts can ultimately be reduced to the
electrical potentials, neurotransmitter levels, and other physical
properties of the neurons and synapses.  Just because there is a physical
representation doesn't mean there isn't a larger pattern whose properties
emerge only from the whole as a set rather than from the individual
parts, especially the parts in isolation.  Trying to determine which single
neuron in the brain is the mind is as futile as trying to determine which
single 1 or 0 in an executable file is the program.


> And you don't need to give a robot a camera, to augment its brain to have
> whatever it is in your brain that has a redness quality, so it can say: "oh
> THAT is what your redness is like."
>

That augment seems to inherently require some sort of visual input - or a
simulation of visual input - as part of it.