[ExI] Interesting take on AI.

Jason Resch jasonresch at gmail.com
Sun Jul 7 12:13:25 UTC 2024


On Sun, Jun 30, 2024, 1:03 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Setting aside whether Moore's Law will continue, exponential growth in
> hardware does not necessarily mean exponential growth in software.  ChatGPT
> running twice as fast or having twice as much memory does not make it twice
> as good by itself, using the functionality measure of "good" that most of
> the public is using for AI.
>


I agree that our subjective "measure of good" does not necessarily track
the amount of computational resources that go into something.

For example, with weather prediction, or the prediction of any chaotic
system, an exponential increase in computational resources and data
collection yields only a linear improvement in capability. We might gain
only one extra day of forecast range by making our weather prediction
system ten times bigger.
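
To make that scaling concrete, here is a small Python sketch (my own toy
illustration, using the logistic map as a stand-in for a chaotic system,
with arbitrary starting values and tolerance). Each tenfold improvement in
measurement precision buys only a few extra steps of usable prediction
horizon:

def logistic(x, r=4.0):
    # The logistic map at r = 4 is a textbook chaotic system.
    return r * x * (1.0 - x)

def horizon(initial_error, x0=0.2, tol=0.1, max_steps=200):
    # Steps until a trajectory started with a small measurement error
    # drifts more than `tol` away from the true trajectory.
    x_true, x_est = x0, x0 + initial_error
    for step in range(max_steps):
        if abs(x_true - x_est) > tol:
            return step
        x_true, x_est = logistic(x_true), logistic(x_est)
    return max_steps

# Each line below represents 10x more precision (10x more "effort"),
# yet the usable horizon grows by only a handful of steps each time.
for exponent in range(2, 10):
    print("error 1e-%d: horizon ~ %d steps" % (exponent, horizon(10.0 ** -exponent)))

The horizon grows roughly logarithmically in the precision: exponential
cost for linear gain.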

Intelligence, insofar as it involves predicting the future, could be seen
as such a case. A superintelligent AI with 1,000 times the human brain's
computational power would not be able to accurately predict the future much
further out than a human could.

All that said, I have another point to make with regard to what you said
about software. In the case of AI, where the software develops itself via
training, I don't think software is a bottleneck to progress.

I recently wrote the following in a different discussion list about the
progress in AI, and I think it would be useful to share here:


I, too, find it surprising that hardware improvements alone, without
specific breakthroughs in algorithms, should produce such great strides in
AI. But I see a possible explanation. Nature has likewise discovered
something that is relatively simple in its behavior and capabilities, yet,
when aggregated into ever larger collections, yields greater and greater
intelligence and capability: the neuron.

There is relatively little difference among neurons across mammals. A
mouse neuron is little different from a human neuron, for example. Yet a
human brain has roughly a thousand times more of them than a mouse brain
does, and this difference in scale seems to be the only meaningful
difference between what mice and humans have been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this
example from nature. Networks of artificial neurons are proven universal
function approximators, so the more neurons aggregated together in one
network, the richer and more complex the functions they can learn to
approximate. Humans no longer write the algorithms these neural networks
implement; the training process comes up with them. And much like the
algorithms implemented in the human brain, they exist in a representation
so opaque and so complex that it escapes our capacity to understand them.
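
As a toy demonstration of that claim (again my own sketch, with an
arbitrary target function and training settings), here is a one-hidden-layer
network in plain numpy, trained by nothing fancier than gradient descent.
The only thing that changes between runs is the number of hidden neurons,
and the fit improves as the network gets wider:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(2 * x)                        # arbitrary target function to learn

def train(width, steps=4000, lr=0.1):
    # One hidden tanh layer; weights updated by plain backpropagation.
    w1 = rng.normal(0.0, 1.0, (1, width)); b1 = np.zeros(width)
    w2 = rng.normal(0.0, 1.0 / np.sqrt(width), (width, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ w1 + b1)             # forward pass
        err = (h @ w2 + b2) - y
        dh = (err @ w2.T) * (1.0 - h ** 2)   # chain rule, nothing more
        w2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(0)
        w1 -= lr * (x.T @ dh) / len(x);  b1 -= lr * dh.mean(0)
    return float(np.mean(err ** 2))

# Same data, same learning rule; only the number of neurons changes.
for width in (2, 8, 32, 128):
    print("hidden neurons: %4d   final error: %.5f" % (width, train(width)))

No one writes down the function the network ends up computing; the training
loop finds it, and the width governs how rich a function can be found.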

So I would argue that there have been massive breakthroughs in the
algorithms that underlie the advances in AI; we just don't know what those
breakthroughs are.

These algorithms are products of systems that now have trillions of parts.
Even the best human programmers can't know the complete details of a
project with around a million lines of code (never mind a trillion).

So have trillion-parameter neural networks unlocked the algorithms for true
intelligence? How would we know when they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human
brain, with its 600T connections, might signal an upper bound for how many
are required, but the brain does a lot of other things too, so the bound
could be lower.

Note that there has been no great breakthrough in working out how
biological neurons learn. We're still using the same method of
backpropagation developed in the 1970s, on the same neuron model from the
1960s. Yet simply scaling this same old approach up, with more training
data and training time, and with more neurons arranged in more layers, has
yielded all the advances we've seen: image and video generators, voice
cloning, language models, master-level players of Go, poker, chess, Atari
games, StarCraft, and so on.


So it seems to me, at this point, that hardware is the only impediment to
future progress in creating more intelligent systems.


Jason




> On Sun, Jun 30, 2024, 11:58 AM efc--- via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Hello everyone,
>>
>> Thought you might enjoy this take on AI:
>>
>>
>> https://techcrunch.com/2024/06/29/mit-robotics-pioneer-rodney-brooks-thinks-people-are-vastly-overestimating-generative-ai/
>>
>> "Brooks adds that there’s this mistaken belief, mostly thanks to Moore’s
>> law, that there will always be exponential growth when it comes to
>> technology — the idea that if ChatGPT 4 is this good, imagine what
>> ChatGPT
>> 5, 6 and 7 will be like. He sees this flaw in that logic, that tech
>> doesn’t always grow exponentially, in spite of Moore’s law".
>>
>> More in the article above.
>>
>> Best regards,
>> Daniel
>>