[ExI] Language models are like mirrors

spike at rainier66.com spike at rainier66.com
Mon Apr 3 02:34:01 UTC 2023


From: spike at rainier66.com <spike at rainier66.com> 
…

>>…  Until… you ask it about stuff you really really know a lot about.
Then… the spell is broken.  … spike


>…Gordon et al, we can debate and talk right past each other all we want…
Less talk, more test.  Let’s do that this evening, then come back with data.
spike  

Earlier this evening, I posted about working in the computer room and
witnessing new computer users do goofy things, we computer geeks laughing
until we nearly wet our diapers, then afterwards wondering about the ethics
of what we were doing (it included a lotta lotta gags in those days (it was
the times (we weren’t really assholes (depending on how you define the term
of course.))))

My colleague from those days (1979-1983) has been a close friend ever
since.  He became a software developer and makes buttloads of money to this
day.  I posted him this evening about my ChatGPT experiment for sentience
detection.  He commented thus:

That's not a good test for sentience.  I know a lot about stuff from my
work, but much of this has never been published, so it wouldn't know
anything about those topics.  That doesn't mean it's not sentient.

There's a lot I'd like to do with GPT-4; however, we can't upload any IP,
so my hands are tied.  In fact, they lock down stuff so much that we're
having a hard time releasing any software.

I asked it a technical question about the compiler we use (Visual Studio),
but I made a little mistake in my question.  I used the wrong file extension
(though a related one).  It gave me a lot of the standard boilerplate, but
part of the answer said something like "actually, you need to use xxx.xxx
instead of yyy.yyy".  I was amazed.  It was as if it knew how to use Visual
Studio.

It definitely has a model of things that exist in the real world as well
as things we use (like Visual Studio).  It can relate those things to other
things in a way that is very similar to how people's brains represent those
concepts, and it can do that with nearly all common things in the world.
I understand when people say they see a glimmer of AGI in it.  I do too.
That's a far cry from achieving sentience, though.

To achieve sentience, it will need to be able to learn, and not just during
the training period.  It will also need access to tools, like math
calculators.  It has no ability to search the internet; that would be
useful for giving answers on current topics.  It would have to have an
overall neural net that controls the whole thing.  Call that its ego or
soul, but without that I'm not sure you would think it was actually
sentient.  If you look up the definition of sentience, it talks about
feelings.  I don't agree with that.

I think we may be getting close to true AGI.  Don't you?  You've got to
admit, it's pretty damned impressive!
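
To make his point about tools concrete: what he's describing is roughly the
loop sketched below.  The model emits a request like CALC(...), an outer
program does the arithmetic, and the answer gets fed back in for the final
reply.  This is just a toy sketch in Python, with the model call stubbed out
and made-up names, not anything he or OpenAI actually runs.

import re

def fake_model(prompt):
    # Stand-in for a real LLM call.  When it needs arithmetic, it emits a
    # CALC(...) request instead of guessing at the answer.
    if "TOOL RESULT:" in prompt:
        result = prompt.rsplit("TOOL RESULT:", 1)[1].strip()
        return "The answer is " + result + "."
    return "CALC(123456789 * 987654321)"

def run_with_calculator(question):
    prompt = question
    for _ in range(5):                 # allow a few tool round-trips
        reply = fake_model(prompt)
        match = re.search(r"CALC\((.*?)\)", reply)
        if not match:
            return reply               # no tool request, so this is final
        value = eval(match.group(1), {"__builtins__": {}})  # toy calculator only
        prompt = question + "\nTOOL RESULT: " + str(value)
    return reply

print(run_with_calculator("What is 123456789 * 987654321?"))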

My friend knows a lot more about software than I do.  I’m just a controls
engineer; he’s a software hipster with an actual degree in the field.  So…
at this point I will leave it at this: let’s keep doing tests looking for
insights and ideas, rather than just the ability to look up stuff, at which
we already know GPT is really good.

spike
