<div dir="ltr">People make a big deal of referents because they think without direct experiences of things like stones, trees or other things in the world an AI cannot really understand, in particular NLMs. But GPT-4 can now understand images anyway, you can easily combine understanding images and language, images are a form of language anyway. <br>These arguments are trite, and they are all an excuse to give humans some kind of priority over other intelligences, when we are just more sophisticated NLMs ourselves (with other information processing modules added to it). <br>It seems to me that we now have all the ingredients for a true AGI to emerge soon, it is just a question of increasing their training parameters and maybe a 10x or at most 100x higher computational power. That can be achieved in 3-4 years max given the trend in parameter training and computational power observed in the last few years. <br>Soon there will be no excuses for human intelligence exceptionalists. <br>Giovanni </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Mar 23, 2023 at 4:11 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Mar 23, 2023, 6:39 PM Adrian Tymes via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Thu, Mar 23, 2023 at 1:02 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div>Others had argued on this thread that it was impossible to extract meaning from something that lacked referents. it seems you and I agree that it is possible to extract meaning and understanding from a data set alone, by virtue of the patterns and correlations present within that data.<br></div></div></blockquote><div><br></div><div>With the caveat that referents are themselves data, so if we include appropriate referents in that data set then yes. Referents are often referenced by their correlations and matching patterns.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I don't understand what you are saying here.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto"><br></div><div dir="auto">I am not convinced a massive brain is required to learn meaning. My AI bots start with completely randomly weighted neural networks of just a dozen or so neurons. In just a few generations they learn that "food is good" and "poison is bad". 
<div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>Just because one type of AI could do a task does not mean that all AIs are capable of that task. You keep invoking the general case, in which some capable AI exists within a superset, and then wondering why there is disagreement about a specific case that concerns only a more limited subset of AIs.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">There was a general claim that no intelligence, however great, could learn meaning from a dictionary (or another data set such as Wikipedia or a list of neural impulse timings) because these data "lack referents". If we agree that an appropriate intelligence can attain meaning and understanding, then we can drop this point.</div></blockquote><div><br></div><div>I recall that the claim was about "no (pure) LLM", not "no (general) intelligence".</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">My original claim was for an intelligent alien species.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>Also, there is a substantial distinction between a dictionary or Wikipedia and any list of neural impulses. A pure LLM might only be able to consult a dictionary or Wikipedia (pictures included); a general intelligence might be able to process neural impulses.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">In all cases it's a big file of 1s and 0s containing patterns and correlations which can be learned.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div dir="auto">There is no task requiring intelligence that a sufficiently large LLM could not learn to do as part of learning symbol prediction. Accordingly, saying an LLM is a machine that could never learn to do X, or understand Y, is a bit like someone saying a particular Turing machine could never run the program Z.</div></div></blockquote><div><br></div><div>And indeed there are some programs that certain Turing machines are unable to run. For example, if a Turing machine contains no randomizer and no way to access random data, it is unable to run a program in which one of the steps requires true randomness. </div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Randomness is uncomputable. And I would go so far as to say that true randomness doesn't exist; there is only information that cannot be guessed or predicted by certain parties. This is because true randomness would require the creation of information, and the creation of information violates the principle of conservation of information in quantum mechanics.</div>
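<div dir="auto"><br></div><div dir="auto">A pseudorandom generator illustrates that point: its output is entirely determined by its seed, yet it looks unpredictable to anyone who does not hold the seed. A minimal sketch using Python's standard library (the seed value is arbitrary):</div><div dir="auto"><pre>
import random

# Two generators seeded identically produce identical "random-looking" streams.
# Nothing unpredictable is created; the whole stream is implied by the seed.
a = random.Random(12345)
b = random.Random(12345)

stream_a = [a.randint(0, 9) for _ in range(20)]
stream_b = [b.randint(0, 9) for _ in range(20)]

print(stream_a)               # looks like random digits
print(stream_a == stream_b)   # True: fully determined by the seed
</pre></div>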
<div dir="auto"><br></div><div dir="auto">In any case, my point wasn't that everything is computable; it's that the universality of computation means any Turing machine can run any program that any other Turing machine can run. The universality of neural networks likewise implies not that every function can be learned, but that any function one neural network can learn can also be learned by any other neural network of sufficient size. Our brains are fundamentally neural networks. If our brains can learn to understand meaning, then this should be within the scope of possibility for other neural networks.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> Much has been written about the limits of pseudorandom generators; I defer to that literature to establish that those are meaningfully distinct from truly random things, at least under common circumstances of significance.</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I am quite familiar with pseudorandom number generators. They are a bit of a fascination of mine.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div>One problem is defining when an AI has grown to be more than just an LLM. What is just an LLM, however large, and what is not just an LLM (whether or not it includes an LLM)?</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">That's a good question. I am not sure it can be so neatly defined. For example, is an LLM trained on some examples of ASCII art considered to have been exposed to visual stimuli?</div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"></div></div>
</blockquote></div>