[ExI] Existential risk of AI

spike at rainier66.com
Wed Mar 15 00:14:53 UTC 2023



------- Original Message -------
On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat
<extropy-chat at lists.extropy.org> wrote:


...

> 
> It is true that the current generation of AIs, which use massive 
> inscrutable tensors to simulate sparse neural networks, are black 
> boxes. But so are the biological brains that they are 
> reverse-engineered from. ...


Stuart, we need to take a breath and remind ourselves what ChatGPT is
actually doing.  It really isn't reasoning the way we usually think of
reasoning.  It is using language models (in a most impressive way, I will
certainly agree) but not reasoning the way a brain does.

If we asked a human a series of questions and she answered with the exact
wording ChatGPT gives, we would conclude that the human is astute, eloquent,
polite, self-assured but modest, very intelligent, somewhat weird, etc., but
ChatGPT is none of these things, for what it is doing is not the same as
what a human mind needs to do and be in order to generate those words.

ChatGPT is the next step above Google (and a very impressive, large step it
is).  It works from sources it finds online and in its training data.  It
cannot reason or have values.  Yet.

My inquiry over the past few weeks is about how to train ChatGPT so that it
uses source material that I give it, rather than source material Elon Musk
gives it.  I want a version of ChatGPT where I control its input sources.
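One way to approximate that today, without retraining anything, would be to
stuff my own source material into the prompt and tell the model to answer
only from that material.  A rough sketch in Python follows; the openai
package call, the model name, and the instruction wording are assumptions
for illustration, not a tested recipe.

import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# Source material I control -- the only "input sources" the model should use.
my_sources = """
Paste excerpts from the documents you trust here.
"""

question = "What do these sources say about existential risk from AI?"

# Ask the model to answer strictly from the supplied sources.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[
        {"role": "system",
         "content": ("Answer using only the source material provided by the "
                     "user.  If the sources do not contain the answer, say "
                     "you do not know.")},
        {"role": "user",
         "content": f"Sources:\n{my_sources}\n\nQuestion: {question}"},
    ],
)

print(response["choices"][0]["message"]["content"])

This only steers which sources the model draws on for a given answer; it does
not change the underlying training data, so it is a workaround rather than
the controlled-input version of ChatGPT I have in mind.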

spike


