[ExI] all we are is just llms
ben at zaiboc.net
Wed Apr 26 08:08:14 UTC 2023
On 25/04/2023 23:40, Giovanni Santostasi wrote:
> Hi Ben and Jason,
> Can we all read this article and make sense of it? It seems very
> relevant to the discussion. I love this idea of "semiotic physics". I
> talked to Umberto Eco about this exact idea when I was a student in
> Bologna, even though it was still very vague in my head at that time.
> Eco was very encouraging, but I was never able to spend time on it. I
> think this could be a great tool for understanding LLMs.
> On Tue, Apr 25, 2023 at 1:06 PM Ben Zaiboc via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> On 25/04/2023 14:06, spike wrote:
> > Cool thx Ben. I had never thought of it that way, but it is a reason
> > for hope. If we find enough ways a brain is like a computer, it
> > suggests a mind can (in theory) exist in a computer, which is
> > something I have long believed and hoped is true. If thought is
> > substrate-dependent on biology, we are all sunk in the long run.
> Thought cannot be dependent on biology. This is something I've thought
> about, and done research on, for a long time, and I'm completely
> convinced. It's logically impossible. If it were, then all of our
> science and logic would be wrong.
> What we call 'a computer' is open to interpretation, and it may well
> be that minds (human-equivalent and above) can't be implemented on the
> types of computer we have now (we already know that simpler minds can
> be). But that doesn't destroy the substrate indifference argument (I
> never liked the term 'substrate independent', because it conjures up
> the concept of a mind that has no substrate. 'Substrate indifferent'
> is more accurate, imo (and yes, even that is not good enough, because
> the substrate must be capable of supporting a mind, and not all will
> be (we just need to find the right ones. (and OMD, I'm turning into a
> bracket nester!!)))).
I keep seeing references to 'Alignment', with no attempt to define what
that means (as far as I can see; there are too many links and references
to follow quickly, so I'll try to have another look later).
Presumably this refers to 'alignment with human interests and goals' but
that's not stated, and it's a huge and seemingly intractable problem anyway.
Apart from that, after reading about a tenth of the article, I think the
answer, in my case, is 'no'. I don't have the background, or the
intelligence, or both, to make sense of what it's saying. I don't
understand the concept of 'semiotic physics', despite the explanation,
and I don't understand what "we draw a sequence of coin flips from a large
language model" actually means. Are they asking an LLM to generate some
random series? What use is that?
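One plausible reading, for what it's worth: prompt the model so that its next token is restricted to two symbols (say "H" and "T"), sample from the probabilities it assigns to those two symbols, append the result to the context, and repeat — treating the model as a stochastic process over strings. A minimal sketch of that idea, where `next_token_probs` is a made-up stand-in for a real model's next-token distribution (the hard-coded probabilities are purely illustrative, not anything from the article):

```python
import random

def next_token_probs(context):
    # Toy stand-in for an LLM's next-token distribution over {"H", "T"}.
    # A real experiment would query an actual language model here; this
    # fake model just slightly favours repeating the previous symbol.
    if context.endswith("H"):
        return {"H": 0.6, "T": 0.4}
    return {"H": 0.4, "T": 0.6}

def draw_flips(n, seed=0):
    # Sample n "coin flips" from the (toy) model, one token at a time,
    # feeding each sampled symbol back into the context.
    rng = random.Random(seed)
    context = "Flip a coin repeatedly: "
    flips = []
    for _ in range(n):
        probs = next_token_probs(context)
        symbol = rng.choices(list(probs), weights=list(probs.values()))[0]
        flips.append(symbol)
        context += symbol
    return flips

print("".join(draw_flips(20)))
```

The interesting part, if I understand the gist, would be that the resulting sequence need not be fair or independent — the statistics of the flips reveal something about the model's internal dynamics.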
I think it's safe to say I don't understand almost everything in this
article (if the first tenth of it is any guide. I'm sure there's some
arcane statistical technique that can predict that!).
I'm afraid it's way beyond me. I just have a little brain.