[ExI] Language models are like mirrors

Dave S snapbag at proton.me
Fri Mar 31 22:09:33 UTC 2023


On Friday, March 31st, 2023 at 5:05 PM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> Think about what we have been doing here the last few weeks: debating whether or not ChatGPT is a form of artificial intelligence.

I don't think anyone denies that LLMs are intelligent. The results speak for themselves. Most of the debate is about understanding and self-awareness.

> As software advanced over the last four decades at least, we dealt with the problem by repeatedly moving the goal posts and saying it isn’t there yet.

That's one way to spin it. The problem, as it often is, is one of definition. AI is a spectrum: from fairly simple devices like adding machines, to game-playing programs that challenge human players, to game-playing machines that beat human champions, to chatbots that can pass for human, to LLMs that can answer complicated questions, write papers, etc., to human equivalence, to superhuman intelligence.

If we declared AI achieved with Deep Blue or AlphaGo we'd have been far short of AGI or human-equivalent AI.

> Well OK then, but suddenly ChatGPT shows up and is capable of doing so many interesting things: mastering any profession which relies primarily on memorization or looking up relevant data (goodbye paralegals) entertaining those who are entertained by chatting with software, training students and Science Olympiad teams, generating genuine-looking scientific research papers and so on.

The main problem with LLMs is that they're only as good as their training data. Their legal arguments might be sound, but they could also be nonsense, or not valid in the jurisdiction involved, or outdated by events since they were trained.

> Over the years we have been debating this question of whether software is AI, but this is the first time where it really isn’t all that clear. We have always concluded it is not true AI, because it isn’t doing what our brains are doing, so it must not be intelligence.

There will always be people who refuse to believe that machines can think, feel, understand, create, etc. But since we're machines and we do all of those things, that argument is not sound.

> But now… now we don’t really know. The reason we don’t really know is not because we don’t understand how the software works, but rather we don’t understand how our brains work.

We know software is getting more intelligent, but nobody who understands LLMs thinks they are human-equivalent--even though they can do a lot that humans can't.

> Conclusion: the reason we don’t know isn’t so much we don’t know what the software is doing, but rather we don’t really know what we are doing.

We don't have to know how our brains work to know that they don't work like adding machines, spreadsheets, chess programs, AlphaGo, ChatGPT, etc. We can tell that by using them and understanding how they work.

-Dave
