[ExI] Peer review reviewed AND Detecting ChatGPT texts

Brent Allsop brent.allsop at gmail.com
Tue Jan 10 17:28:18 UTC 2023

Hi Adrian
[image: 3_functionally_equal_machines_tiny.png]
On Mon, Jan 9, 2023 at 9:40 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Mon, Jan 9, 2023 at 7:43 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> On Mon, Jan 9, 2023 at 6:00 PM Adrian Tymes via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>> I believe the disconnect here is what sort of AI is meant.  I refer to
>>> AI in general.  I agree that a chat bot with no visual system can not know
>>> red like we can.  However, I believe that a similar AI that included a
>>> visual system with color sensors, like the robot depicted in your red
>>> apple/green apple/machine apple image, could have its own version of
>>> redness.
>> So you're asserting that self driving cars represent the colors of
>> objects they "see" with something more than strings of abstract 1s and 0s,
>> which are specifically designed to be abstracted away from whatever
>> physical properties happens to be representing them (with transducing
>> dictionary mechanisms), and for which you can't know what the various
>> strings mean, without a dictionary?
> No more than this is the case for our brains.  Which it isn't.
> Also, self-driving cars aren't the best example, as most of them don't see
> in more than one color.
> That said, my Tesla represents, to me via its dashboard, shapes nearby as
> abstractions from the 1s and 0s of its sensors, and those abstractions had
> to have been specifically designed.  Maybe it can't see in color, but it
> demonstrates perceiving a "carness" vs. a "busness" vs. a "pedestrianess"
> sense of nearby objects.  There are various concepts in programming, such
> as object oriented programming, that are all about attaching such qualities
> to things and acting based on which qualities a thing has.  The qualities
> are necessarily digital representations, rather than literally the actual
> physical properties.  Though even in this case it can be claimed that some
> sort of dictionary-like thing is involved.
> And even in our brain, all thoughts can ultimately be reduced to the
> electrical potentials, neurotransmitter levels, and other physical
> properties of the neurons and synapses.  Just because there is a physical
> representation doesn't mean there isn't a larger pattern with properties
> that only emerge from the whole as a set rather than the individual
> parts, especially the parts in isolation.  Trying to determine which single
> neuron in the brain is the mind is as futile as trying to determine which
> single 1 or 0 in an executable file is the program.
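
The "attaching qualities to things" idea from object-oriented programming can be sketched in a few lines of Python (the class, labels, and numbers here are hypothetical, purely for illustration — the point is that the quality is a digital label, not a physical property):

```python
# Sketch of the OOP point above: "carness" or "pedestrianess" is a
# digital quality attached to an object, and behavior dispatches on
# which quality the object carries -- never on any physical property.

class DetectedObject:
    def __init__(self, kind):
        self.kind = kind  # the attached "quality": a digital label

    def braking_margin(self):
        # Behavior depends entirely on the attached quality.
        margins = {"car": 5.0, "bus": 10.0, "pedestrian": 20.0}
        return margins[self.kind]

nearby = [DetectedObject("car"), DetectedObject("pedestrian")]
print([obj.braking_margin() for obj in nearby])  # -> [5.0, 20.0]
```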

 Yes, I understand all this, and it is all true.

>> And you don't need to give a robot a camera to augment its brain to have
>> whatever it is in your brain that has a redness quality, so it can say: "oh
>> THAT is what your redness is like."
> That augment seems to inherently require some sort of visual input - or a
> simulation of visual input - as part of it.

But this indicates that I am talking about something completely different
from what you are talking about.
You seem to be ONLY talking about the fact that all 3 of the systems in the
above image can tell you the strawberry is red.  They can all pick the
strawberry (or tell you how this is different from a car, bus, or
pedestrian), and they can all be equally intelligent.

Your last statement suggests you don't understand what color is and isn't
(if you do understand this model of color, could you explain it?).  It
indicates to me that you don't understand the radically different ways the
above systems do their computation.  My prediction is that one way is far
more efficient at performing the required computation than the other, and
that is why evolution did it the efficient phenomenal way, rather than the
abstract way, which requires more abstracting dictionaries.  (This is the
same as software running directly on bare hardware being more efficient
than software running on an abstracting virtual machine, which requires
more mapping.)
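
The extra "abstracting dictionary" layers can be sketched in Python (the bit code and dictionary entries are made up for illustration; the point is that the abstract code has no intrinsic meaning, and each abstraction layer adds one more lookup):

```python
# Sketch of the "transducing dictionary" argument: the bit string 0b101
# has no intrinsic redness.  A dictionary is needed to map the abstract
# code to a meaning, and each extra abstraction layer is one more lookup
# between the representation and what it represents.

RAW_CODE = 0b101  # abstract representation delivered by a sensor

sensor_dictionary = {0b101: "wavelength ~700nm"}     # code -> physical stimulus
perception_dictionary = {"wavelength ~700nm": "red"}  # stimulus -> label

# Two mapping steps before the system can "say" the strawberry is red:
physical = sensor_dictionary[RAW_CODE]
label = perception_dictionary[physical]
print(label)  # -> red
```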