[ExI] More thoughts on sentient computers
Gadersd
gadersd at gmail.com
Thu Feb 23 04:10:48 UTC 2023
Spike, training these models as they run is definitely possible, but the main barrier is cost. Training a model as large as ChatGPT requires an investment of at least hundreds of thousands of dollars, so for each individual to have a personalized ChatGPT would require us all to be relatively wealthy. Give it a few years and computing costs will come down, or perhaps an improvement in model architecture will enable lower-cost training, but for now most of us will have to be content with fixed models.
I’m personally hoping for an architectural improvement that will endow these models with persistent memory, so that they can, in a sense, cheaply train themselves as they run, as our own brains do. As far as I am aware, however, this has not been developed yet. If these models could train on their own outputs, that might enable them to develop a model of their own functioning, which in turn might endow them with embodied characteristics and self-consciousness.
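To make the loop I have in mind concrete, here is a minimal sketch in Python. The ToyModel class is purely illustrative (a word-bigram frequency table standing in for a real network, not any actual ChatGPT API); the only point is the shape of the feedback loop: generate, then learn from what was generated.

```python
import random

class ToyModel:
    """Toy stand-in for a language model: a word-bigram frequency table."""

    def __init__(self, seed_text):
        self.counts = {}  # word -> {next_word: count}
        self.learn(seed_text)

    def learn(self, text):
        """Update the frequency table from text -- the 'training as it runs' step."""
        words = text.split()
        for a, b in zip(words, words[1:]):
            self.counts.setdefault(a, {})
            self.counts[a][b] = self.counts[a].get(b, 0) + 1

    def generate(self, start, length=8):
        """Sample a continuation from the learned frequencies."""
        out = [start]
        for _ in range(length):
            nxt = self.counts.get(out[-1])
            if not nxt:
                break
            words, weights = zip(*nxt.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

# The speculative loop: the model keeps learning from its own outputs,
# so its "memory" persists and drifts as it runs.
model = ToyModel("the model reads the text and the model learns")
for _ in range(5):
    output = model.generate("the")
    model.learn(output)  # persistent memory: outputs feed back in
```

Of course, whether feeding a real model its own outputs produces self-knowledge rather than degeneration is exactly the open question.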
> On Feb 22, 2023, at 10:15 PM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>
>
> …> On Behalf Of Tara Maya via extropy-chat
> ..
>
> >…I too suspect that consciousness is recursive, redundant (created by duplication of older systems in the nervous system to create new systems) and iterative or self-referential. I'm not sure how relevant it is to general intelligence; or maybe it's impossible to have sentience without it. To me, that is a great question which only another sentient species, or a zoo of them, could answer… Tara
>
>
> OK so what if we start with something like a collection of untrained ChatGPTs, then introduce them to a chosen subset of text, such as… my own posts to ExI for the past 20 years, Tara you do likewise, or other text you have generated such as your writings. Then we allow the GPTs to browse the internet randomly or at their discretion. Then we have them debate each other, and in so doing, train each other. Would that be a kind of recursion?
>
> What if a pristine ChatGPT is allowed to train on internet text at its discretion for a coupla weeks, then a copy is saved, then it is allowed to browse for two more. Then the two versions could debate each other. That would be self-referential in a way.
>
> spike
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
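Spike's snapshot-and-debate procedure above can be sketched in the same toy fashion. Everything here is illustrative: the "model" is just a bigram frequency table, and the learn/generate helpers are hypothetical stand-ins for real fine-tuning and sampling.

```python
import copy
import random

def learn(counts, text):
    """Update a bigram frequency table from text (the 'browsing' step)."""
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts.setdefault(a, {})
        counts[a][b] = counts[a].get(b, 0) + 1

def generate(counts, start, length=8):
    """Sample a continuation from the table."""
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

# Phase 1: one model "browses" for a while.
model_a = {}
learn(model_a, "the net is wide and the net is deep")

# Snapshot a copy, then let the original keep browsing.
model_b = copy.deepcopy(model_a)
learn(model_a, "the net is strange and the net is new")

# Phase 2: the two versions debate -- each trains on the other's output,
# which is the self-referential step.
for _ in range(3):
    learn(model_b, generate(model_a, "the"))
    learn(model_a, generate(model_b, "the"))
```

The interesting question the sketch leaves open is whether such mutual training converges, diverges, or merely amplifies whatever the snapshot already contained.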