[ExI] ai in education
Jason Resch
jasonresch at gmail.com
Sat Mar 7 03:25:15 UTC 2026
On Fri, Mar 6, 2026, 8:21 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> -----Original Message-----
> From: Keith Henson <hkeithhenson at gmail.com>
> ...
>
> >...Spike, either I have a complete misunderstanding of LLM-type AI, or
> you do.
>
> >...There is no source code for any AI that I know about. There is
> training code with which an AI is trained on a vast corpus of text, but
> nothing a programmer would recognize as code. As far as I know, the inside
> of an AI is a mystery to all the companies. Keith
>
> Keith I am no expert on it. But my reasoning is that any computer must
> have a set of instructions on what to do before it does anything. I am
> sure of this: the military will not turn a mystery agent loose with control
> of any weapons. They must know exactly how the thing works before they
> will allow it to control anything.
>
> I think of it as somehow analogous to the autonomous drone target
> recognition system the Berkeley team is competing in. There is definite
> source code there, and it is trained to recognize and distinguish between
> targets, represented by manikins on the course. Last year's competition
> featured a fleeing felon, an injured hiker, a nude sunbather, a lost pet,
> etc. The drone had to figure out which is which, and do the right thing:
> no dropping a fragmentation grenade on the sunbather for instance. Those
> things definitely have code.
>
> LLMs must have some kind of source code, or it would do nothing, ja?
>
There is code that runs the model, and it is fairly small and standard. But
that code contains none of the intelligence. All of the intelligence, the
modes of thinking, etc., lie in the model itself.
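To make that concrete, here is a toy sketch (my own illustration, not any
real inference engine) of what "the code that runs the model" amounts to: a
short, generic loop of matrix multiplications. Swap in different weight
files and the identical code behaves completely differently.

```python
import numpy as np

# Hypothetical miniature "inference engine": the code is generic;
# the behavior comes entirely from the weight matrices it loads.
def forward(weights, x):
    for W in weights:             # one weight matrix per layer
        x = np.maximum(W @ x, 0)  # matrix multiply + ReLU nonlinearity
    return x

# Two different "models" running on the identical code:
rng = np.random.default_rng(0)
weights_a = [rng.normal(size=(4, 4)) for _ in range(3)]
weights_b = [rng.normal(size=(4, 4)) for _ in range(3)]
x = np.ones(4)
out_a = forward(weights_a, x)  # same code, different weights,
out_b = forward(weights_b, x)  # different behavior
```

The point: nothing in `forward` tells you what either model "knows"; that
lives entirely in the numbers.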
The model is a several-hundred-gigabyte file consisting of hundreds of
billions of parameters (think of grids of floating point numbers arranged
in matrices). It is less like human-readable code and more like knowing the
raw connection strengths between the neurons in the neural network of a
brain.
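A quick back-of-envelope calculation shows why the file is that large
(the 175-billion figure is just an illustrative GPT-3-scale example, not a
claim about any particular model):

```python
# Why a large model file runs to hundreds of gigabytes.
params = 175e9            # e.g., ~175 billion parameters (GPT-3 scale)
bytes_per_param = 2       # 16-bit floating point per parameter
size_gb = params * bytes_per_param / 1e9
# ~350 GB of raw floating point numbers, with no human-readable code in it.
```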
There are, however, methods to introspect, at various levels, what the
model is doing: for example, by tracing which parts of the model activate
for different inputs. But this is still a far cry from understanding the
algorithms. It is perhaps more akin to doing an fMRI on a brain and seeing
which parts light up during different tasks.
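That fMRI analogy can be sketched in code too. Here is a toy probe (again
my own illustration, not a real interpretability tool) that records which
units "light up" for a given input:

```python
import numpy as np

# Hypothetical interpretability probe: record which units activate
# for a given input, analogous to watching brain regions in an fMRI.
def forward_with_trace(weights, x):
    trace = []
    for W in weights:
        x = np.maximum(W @ x, 0)  # matrix multiply + ReLU
        trace.append(x > 0)       # boolean mask: which units fired here
    return x, trace

rng = np.random.default_rng(1)
weights = [rng.normal(size=(5, 5)) for _ in range(2)]
_, trace = forward_with_trace(weights, np.ones(5))
# `trace` shows *where* activity occurs, but not *why*:
# the algorithm encoded in the weights remains opaque.
```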
Once machines have billions of parts or more, they can never be fully
comprehensible to our human minds.
Jason
>