[ExI] Language models are like mirrors

William Flynn Wallace foozler83 at gmail.com
Sat Apr 1 01:19:21 UTC 2023


 the reason we don’t know isn’t so much we don’t know what the software is
doing, but rather we don’t really know what we are doing.   spike

Truly, some of this about AI and the programmers seems like the blind
leading the blind.  Is the AI doing what it is told?  Can it do otherwise?
Since the egregious errors these systems produce are not corrected by the
AI itself, adequate feedback evidently is not programmed in.  Does anyone
solve a math problem without going back over it and correcting every error
they can find?  Here's what I suggest: make the AI ask another AI to check
its work, just as students would.  An added bonus is that you then have an
AI teaching an AI, which may be better than being taught by the
programmers.  A rough sketch of the idea follows.
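
A rough sketch of that solver-and-reviewer loop might look like the Python
below.  The ask_model() helper and the model names are hypothetical
placeholders, not a real API; the point is only the shape of the loop: one
model drafts an answer, a second model audits it, and the draft is revised
if the reviewer objects.

# Rough sketch of the "one AI checks another AI's work" idea.
# ask_model() is a hypothetical stand-in for whatever chat-completion
# call you actually use; here it is stubbed out so the flow can be traced.

def ask_model(model: str, prompt: str) -> str:
    """Hypothetical wrapper around a chat model; replace with a real client."""
    # Stub: a real implementation would send `prompt` to `model` and return its reply.
    return "APPROVED" if model == "reviewer-model" else "x = 4"

def answer_with_review(question: str) -> str:
    # 1. The solver model produces a draft answer, showing its work.
    draft = ask_model("solver-model", f"Solve and show your steps:\n{question}")

    # 2. A second model audits the draft, like a student checking a classmate's work.
    verdict = ask_model(
        "reviewer-model",
        f"Check this answer for errors. Reply APPROVED or list the mistakes.\n"
        f"Question: {question}\nAnswer: {draft}",
    )

    # 3. Accept the draft only if the reviewer finds no errors; otherwise revise once.
    if verdict.strip().startswith("APPROVED"):
        return draft
    return ask_model(
        "solver-model",
        f"Revise your answer using this feedback:\n{verdict}\nQuestion: {question}",
    )

print(answer_with_review("Solve 2x + 3 = 11 for x."))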

bill w

On Fri, Mar 31, 2023 at 4:17 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> …> On Behalf Of Giovanni Santostasi via extropy-chat…
> Subject: Re: [ExI] Language models are like mirrors
>
> Gordon,
>
> >…Your analogy of the mirror…
>
> Giovanni
>
> Think about what we have been doing here the last few weeks: debating
> whether or not ChatGPT is a form of artificial intelligence.  As software
> has advanced over at least the last four decades, we dealt with the
> question by repeatedly moving the goal posts and saying it isn't there
> yet.  Well OK then, but suddenly ChatGPT shows up and is capable of doing
> so many interesting things: mastering any profession which relies
> primarily on memorization or on looking up relevant data (goodbye,
> paralegals), entertaining those who are entertained by chatting with
> software, training students and Science Olympiad teams, generating
> genuine-looking scientific research papers, and so on.
>
> Over the years we have been debating this question of whether software is
> AI, but this is the first time it really isn't all that clear.  We
> have always concluded it is not true AI, because it isn’t doing what our
> brains are doing, so it must not be intelligence.  But now… now we don’t
> really know.  The reason we don’t really know is not because we don’t
> understand how the software works, but rather we don’t understand how our
> brains work.
>
> Conclusion: the reason we don’t know isn’t so much we don’t know what the
> software is doing, but rather we don’t really know what we are doing.
>
> spike
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

