<div dir="auto"><div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Wed, Sep 17, 2025, 12:18 PM Brent Allsop via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br><div>Hi Jason and Bill,</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 17, 2025 at 8:12 AM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">On Wed, 17 Sept 2025 at 14:43, spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br>> Jason, perhaps this is tangential to your point: I wondered how that was done (car-bots reading traffic lights based on color.) <br>><br>> My source tells me they don’t. 
It knows the position of the lights: red above, yellow in the middle, green below, and can tell which one is emitting light.<br>><br>> spike <br>> _______________________________________________</div><div dir="ltr"><br></div><div dir="ltr"><br></div><div dir="ltr"><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">Hi Spike</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">The system probably varies between different brands of robo cars.</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">But asking an AI how it does it -----</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">BillK</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)"><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">Phind AI:</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">Self-driving vehicles use sophisticated computer vision systems to reliably detect and classify traffic signals among various environmental light sources. This complex process involves multiple layers of detection and verification to ensure accurate signal recognition.<br><br>### Core Detection Methods<br><br>1. **Visual Processing Pipeline**<br> - High-resolution cameras capture detailed images of the scene<br> - Advanced algorithms detect circular shapes characteristic of traffic lights<br> - Color analysis distinguishes between red, yellow, and green signals<br> - Multiple frames are analyzed to confirm consistent readings<br></div></div></div></div></div></blockquote><div><br></div><div>Yes, both humans and AI systems do this processing to clean up the data from the sensors. The final result of both perception systems is to render cleaned-up models of what is being perceived, which are then engineered to be used for decision making. 
It's just that the model produced by the artificial system renders words (like 'red' or '0xFF0000', which can be distinguished from 'green' or '0x00FF00') into its knowledge, while our knowledge is made up of color qualities, like redness, which can be distinguished from greenness. This is obviously designed or engineered by nature to use redness for things that should stand out to us. We can engineer things to use different qualities or different words, like using color inversion glasses and so on. The bottom line is, this is an engineering decision that must be made in order for some conscious entity to be 'like something' in particular.</div></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Our visual system is just doing a more complicated, multi-level and multi-dimensional form of discrimination. If you engineered something to perform the same kind of multi-level and multi-dimensional discrimination, it would also experience red and green as we do. You say that red stands out to get attention. That too is an additional level of discrimination. You could also engineer a robot to dedicate additional attention to patches of 0xFF0000 pixels, and it would describe that attentional aspect of 0xFF0000. Every aspect of red you can think of to describe can be added as an additional aspect of functional processing for the system perceiving it. Once you've included every aspect there is, the experience of 0xFF0000 will be the same for the robot as red is for the human visual system.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>Similarly, we can model more than the visible region of the electromagnetic spectrum, like an AI does, using words and numbers to represent what our sensors are detecting. 
Neither is like anything. We can 'color map' or 'false color' these regions into our visible light range, to use the same color qualities we use for light. Again, this is all an engineering decision. It's not enough to ask whether something is conscious; you also need to ask what it is engineered to be like. And if it isn't engineered to be like anything, it isn't sentient or conscious, in my book. It is just equally or more intelligent.</div></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">If the system can distinguish between the two states of "sensing 0xFF0000" and "sensing 0x00FF00" then it is perceptive. If it is perceptive then it is conscious (of its perceptions), and if it is conscious then there is something it is like to be that conscious thing. Further, if there is "something it is like to be sensing 0xFF0000" it is necessarily different from the "something it is like to be sensing 0x00FF00"; otherwise the system wouldn't be capable of making such a distinction to begin with.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>The Turing test can't tell you if something is conscious, that is, unless you ask it a question like: "What is redness like for you?"</div></div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">The Argonov test (an extension to the Turing test) is capable of detecting consciousness. But it isn't capable of detecting all kinds of consciousness. For this we need a theory built on a logical, empirical foundation. 
With such a theory in hand, we can say which systems are or aren't conscious (according to that theory).</div><div dir="auto"><br></div><div dir="auto">According to the theory I have arrived at, any system capable of intelligent behavior is conscious to at least some degree. The more complex the perceptual system and range of behavioral options, the more complex the states of consciousness that being will be capable of.</div><div dir="auto"><br></div><div dir="auto">So worms and thermostats are conscious, but the complexity of their conscious states is many orders of magnitude less than that of human consciousness. We can make the analogy between a bacterium and a blue whale: we are comfortable using the word "alive" to describe each, even though there is a vast gulf of complexity between the two. That is how I think we need to understand consciousness. From it we can take away a few lessons: there are many ways to be conscious, conscious states can be extremely simple, consciousness is not hard to achieve (but human-level consciousness is).</div><div dir="auto"><br></div><div dir="auto">Jason</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_quote"><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">2. 
**Signal Verification Techniques**<br> - Position tracking: Traffic lights appear in predictable locations<br> - Pattern recognition: Known shapes and arrangements of signals<br> - Motion analysis: Traffic lights remain stationary<br> - Contextual understanding: Relationship to intersections and roads<br><br>### Safety Measures<br><br>1. **Multiple Confirmation Systems**<br> - Requires agreement between multiple detection methods<br> - Implements voting systems among different sensors<br> - Maintains historical context of signal states<br> - Continuously verifies detections<br><br>2. **Error Prevention**<br> - Regular software updates<br> - Sensor calibration checks<br> - Weather condition monitoring<br> - Fallback procedures for uncertain situations<br><br>The reliability of these systems comes from their multi-layered approach, combining visual detection with contextual awareness and sensor fusion. By verifying traffic signals through multiple independent methods, autonomous vehicles can maintain safe operation even in challenging environmental conditions.</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)">--------------------</div></div></div>
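The position-based check spike describes, together with the multi-frame confirmation step above, can be sketched as follows (a minimal illustration with made-up thresholds and function names, not any vendor's actual code):

```python
from collections import Counter

def classify_signal(roi_brightness):
    """Classify a vertical three-lamp signal by which region is brightest.

    roi_brightness: (top, middle, bottom) mean pixel intensities (0-255)
    for the three lamp regions. Layout follows the convention described
    above: red on top, yellow in the middle, green on the bottom.
    The dominance threshold is an illustrative assumption.
    """
    names = ("red", "yellow", "green")
    lit = max(range(3), key=lambda i: roi_brightness[i])
    others = max(roi_brightness[i] for i in range(3) if i != lit)
    # Require the lit lamp to clearly dominate the other two; otherwise
    # report "unknown" rather than guess (the "fallback procedures for
    # uncertain situations" mentioned above).
    if roi_brightness[lit] < 1.5 * others + 20:
        return "unknown"
    return names[lit]

def confirm_over_frames(per_frame_states, min_agreement=3):
    """Multi-frame confirmation: accept a state only when enough recent
    frames agree ("multiple frames are analyzed to confirm consistent
    readings"). Returns None when no state wins a sufficient majority."""
    state, count = Counter(per_frame_states).most_common(1)[0]
    if state != "unknown" and count >= min_agreement:
        return state
    return None
```

For example, `classify_signal((240, 30, 25))` returns `"red"`, while an ambiguous frame such as `(80, 85, 90)` returns `"unknown"` and would leave the final decision to the other verification layers.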
</div>
</div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div>
</div>
</blockquote></div></div></div>