[ExI] Seemingly Conscious AI Is Coming
Jason Resch
jasonresch at gmail.com
Wed Sep 17 17:38:40 UTC 2025
On Wed, Sep 17, 2025, 12:18 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> Hi Jason and Bill,
>
>
>
> On Wed, Sep 17, 2025 at 8:12 AM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, 17 Sept 2025 at 14:43, spike jones via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> > Jason, perhaps this is tangential to your point: I wondered how that
>> > was done (car-bots reading traffic lights based on color).
>> >
>> > My source tells me they don’t. The car knows the position of the lights:
>> > red above, yellow in the middle, green below, and can tell which one is
>> > emitting light.
>> >
>> > spike
>>
>>
>> Hi Spike
>> The system probably varies between different brands of robo-cars.
>> But here is what an AI says when asked how it does it:
>> BillK
>>
>> Phind AI:
>> Self-driving vehicles use sophisticated computer vision systems to
>> reliably detect and classify traffic signals among various environmental
>> light sources. This complex process involves multiple layers of detection
>> and verification to ensure accurate signal recognition.
>>
>> ### Core Detection Methods
>>
>> 1. **Visual Processing Pipeline**
>>    - High-resolution cameras capture detailed images of the scene
>>    - Advanced algorithms detect circular shapes characteristic of traffic
>>      lights
>>    - Color analysis distinguishes between red, yellow, and green signals
>>    - Multiple frames are analyzed to confirm consistent readings
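>>
>> A minimal Python/OpenCV sketch of such a pipeline (purely illustrative:
>> the HSV thresholds, Hough parameters, and voting scheme below are
>> assumptions, not any vendor's actual code):
>>
>> import cv2
>> import numpy as np
>> from collections import Counter, deque
>>
>> # Approximate HSV ranges for lit signal lamps (hue wraps around for red).
>> HSV_RANGES = {
>>     "red":    [((0, 120, 120), (10, 255, 255)),
>>                ((170, 120, 120), (180, 255, 255))],
>>     "yellow": [((20, 120, 120), (35, 255, 255))],
>>     "green":  [((45, 120, 120), (90, 255, 255))],
>> }
>>
>> def classify_frame(bgr_image):
>>     """Return the color whose mask contains the largest circular lit region."""
>>     hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
>>     best_color, best_area = None, 0
>>     for color, ranges in HSV_RANGES.items():
>>         mask = np.zeros(hsv.shape[:2], np.uint8)
>>         for lo, hi in ranges:
>>             mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
>>         # Detect the circular blob characteristic of a lamp.
>>         circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2,
>>                                    minDist=20, param1=100, param2=15,
>>                                    minRadius=3, maxRadius=40)
>>         area = int(mask.sum()) // 255
>>         if circles is not None and area > best_area:
>>             best_color, best_area = color, area
>>     return best_color
>>
>> recent = deque(maxlen=5)  # analyze multiple frames before trusting a reading
>>
>> def confirmed_signal(bgr_image):
>>     recent.append(classify_frame(bgr_image))
>>     color, votes = Counter(recent).most_common(1)[0]
>>     return color if color is not None and votes >= 4 else None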
>>
>
> Yes, both humans and AI systems do this processing to clean up the data
> from the sensors. The final result of both perception systems is to render
> cleaned-up models of what is being perceived, which are then engineered to
> be used for decision making. It's just that the model produced by the
> artificial system renders words (like 'red' or '0xFF0000', which can be
> distinguished from 'green' or '0x00FF00') into its knowledge, while our
> knowledge is made up of color qualities, like redness, which can be
> distinguished from greenness. This is obviously designed or engineered by
> nature to use redness for things that should stand out to us. We can
> engineer things to use different qualities or different words, like using
> color-inversion glasses and so on. The bottom line is that this is an
> engineering decision that must be made in order for some conscious entity
> to be 'like something' in particular.
>
Our visual system is just doing a more complicated, multi-level and
multi-dimensional form of discrimination. If you engineered something to
perform the same kind of multi-level and multi-dimensional discrimination,
it would also experience red and green as we do. You say that red stands
out to get attention. That too is an additional level of discrimination.
You could also engineer a robot to dedicate additional attention to patches
of 0xFF0000 pixels, and it would then describe that attentional aspect of
0xFF0000. For every aspect of red you can think of to describe, a
corresponding aspect of functional processing can be added to the system
perceiving it. Once you've included every aspect there is, the experience
of 0xFF0000 will be the same for the robot as red is for the human visual
system.
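
As a toy illustration of that point (the names and the simple attention
rule below are mine, purely hypothetical), here is a perceiver that both
discriminates 0xFF0000 from 0x00FF00 and allocates extra processing to the
red patches, so that "standing out" becomes one more functional aspect it
can report:

import numpy as np

RED, GREEN = (255, 0, 0), (0, 255, 0)   # 0xFF0000 and 0x00FF00

def perceive(rgb_image):
    """Discriminate red from green and attend preferentially to red."""
    red_mask = np.all(rgb_image == RED, axis=-1)
    green_mask = np.all(rgb_image == GREEN, axis=-1)
    report = {"red_pixels": int(red_mask.sum()),
              "green_pixels": int(green_mask.sum())}
    # The additional attentional level: red regions receive further
    # processing (here, localization), which the system can then describe.
    if report["red_pixels"]:
        ys, xs = np.nonzero(red_mask)
        report["attended_red_centroid"] = (float(ys.mean()), float(xs.mean()))
    return report
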
> Similarly, we can model more than the visible region of the
> electromagnetic spectrum, like an AI does, using words and numbers to
> represent what our sensors are detecting. Both aren't like anything. We
> can 'color map' or 'false color' these regions into our visible light
> range, to use the same color qualities we use for light. Again, this is
> all an engineering decision. It's not enough to ask if something is
> conscious, you need to also ask, what is it engineered to be like. And if
> it isn't engineered to be like anything, it isn't sentient or conscious, in
> my book. It is just equally or more intelligent.
>
If the system can distinguish between the two states of "sensing 0xFF0000"
and "sensing 0x00FF00", then it is perceptive. If it is perceptive, then it
is conscious (of its perceptions), and if it is conscious, then there is
something it is like to be that conscious thing. Further, if there is
"something it is like to be sensing 0xFF0000", it is necessarily different
from the "something it is like to be sensing 0x00FF00"; otherwise the
system wouldn't be capable of making such a distinction to begin with.
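
The argument can be made concrete in a few lines (a trivial sketch, with
hypothetical state names):

def sense(pixel):
    # Distinguishable inputs must map to distinct internal states; if both
    # mapped to the same state, nothing downstream could recover the
    # difference, and the discrimination would be impossible.
    return {"0xFF0000": "state_red", "0x00FF00": "state_green"}[pixel]

assert sense("0xFF0000") != sense("0x00FF00")
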
> The Turing test can't tell you if something is conscious, that is, unless
> you ask it a question like: "What is redness like for you?"
>
The Argonov test (an extension of the Turing test) is capable of detecting
consciousness. But it isn't capable of detecting all kinds of
consciousness. For that we need a theory built on a logical, empirical
foundation. With such a theory in hand, we can say which systems are or
aren't conscious (according to that theory).
According to the theory I have arrived at, any system capable of
intelligent behavior is conscious to at least some degree. The more complex
the perceptual system and range of behavioral options, the more complex the
states of consciousness that being will be capable of.
So worms and thermostats are conscious, but the complexity of their
conscious states is many orders of magnitude less than that of human
consciousness. We can draw an analogy between a bacterium and a blue
whale: we are comfortable using the word "alive" to describe each, even
though there is a vast gulf of complexity between the two. That is how I
think we need to understand consciousness. From this analogy we can take
away a few lessons: there are many ways to be conscious, conscious states
can be extremely simple, and consciousness is not hard to achieve (but
human-level consciousness is).
Jason
>
>
>
>> 2. **Signal Verification Techniques**
>>    - Position tracking: Traffic lights appear in predictable locations
>>    - Pattern recognition: Known shapes and arrangements of signals
>>    - Motion analysis: Traffic lights remain stationary
>>    - Contextual understanding: Relationship to intersections and roads
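>>
>> A sketch of how such independent checks might be combined (the
>> thresholds and field names are illustrative assumptions):
>>
>> def verify_detection(det, expected_pos, prev_pos, near_intersection):
>>     """Accept a candidate detection only if independent checks agree."""
>>     checks = [
>>         # Position tracking: close to where the map expects a light.
>>         abs(det["x"] - expected_pos[0]) < 2.0 and
>>         abs(det["y"] - expected_pos[1]) < 2.0,
>>         # Motion analysis: real traffic lights stay put between frames.
>>         prev_pos is None or (abs(det["x"] - prev_pos[0]) < 0.2 and
>>                              abs(det["y"] - prev_pos[1]) < 0.2),
>>         # Contextual understanding: lights belong at intersections.
>>         near_intersection,
>>     ]
>>     return all(checks)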
>>
>> ### Safety Measures
>>
>> 1. **Multiple Confirmation Systems**
>>    - Requires agreement between multiple detection methods
>>    - Implements voting systems among different sensors
>>    - Maintains historical context of signal states
>>    - Continuously verifies detections
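>>
>> A minimal voting scheme of this kind might look like the following
>> (illustrative only; the agreement threshold is an assumption):
>>
>> from collections import Counter
>>
>> def fused_signal(detector_votes, history, min_agreement=2):
>>     """Majority vote among independent detectors; fall back to the
>>     last confirmed state when the detectors disagree."""
>>     votes = [v for v in detector_votes if v is not None]
>>     if votes:
>>         color, count = Counter(votes).most_common(1)[0]
>>         if count >= min_agreement:
>>             history.append(color)  # maintain historical context
>>             return color
>>     return history[-1] if history else None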
>>
>> 2. **Error Prevention**
>>    - Regular software updates
>>    - Sensor calibration checks
>>    - Weather condition monitoring
>>    - Fallback procedures for uncertain situations
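>>
>> The fallback logic reduces, in essence, to defaulting to the safest
>> interpretation (a sketch; the confidence threshold is an assumption):
>>
>> def act_on_signal(color, confidence, threshold=0.9):
>>     """When uncertain (occlusion, glare, bad weather), behave as if
>>     the light were red."""
>>     if color is None or confidence < threshold:
>>         return "slow_and_treat_as_red"
>>     return {"red": "stop", "yellow": "prepare_to_stop",
>>             "green": "proceed"}[color]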
>>
>> The reliability of these systems comes from their multi-layered approach,
>> combining visual detection with contextual awareness and sensor fusion. By
>> verifying traffic signals through multiple independent methods, autonomous
>> vehicles can maintain safe operation even in challenging environmental
>> conditions.
>> --------------------