[ExI] More thoughts on sentient computers

Tara Maya tara at taramayastales.com
Thu Feb 23 04:55:03 UTC 2023


Yes, I forgot to add that consciousness, and life itself, must also be self-directed; that is, its actions must lead to its own survival and reproduction. Right now, I don't see ChatGPT as any more self-directed than a sword or a typewriter. It's an extension of the human mind directing it. Yes, it has a delightful sense of "originality," as if it's inventing things, but right now this is more like dealing a deck of Tarot cards to get new ideas. The main recursivity is still human-(machine-extended) to human-(machine-extended). Spike, if you and I both trained our own Chats to argue against each other, they'd both be extensions of our minds, wouldn't they?

We would have mega-minds, but I don't think this would make the Chats independent any more than wings are independent of birds. Am I missing something?

Tara Maya

> On Feb 22, 2023, at 8:10 PM, Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> Spike, training these models as they run is definitely possible, but the main barrier is cost. Training a model as large as ChatGPT requires an investment of at least hundreds of thousands of dollars, so for each individual to have a personalized ChatGPT, we would all have to be relatively wealthy. Give it a few years and computing costs will come down, or perhaps an improvement in model architecture will enable lower-cost training, but for now most of us will have to be content with fixed models.
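
(For a rough sense of scale, here is a back-of-envelope estimate in Python using the common ~6 x parameters x tokens rule of thumb for training FLOPs. Every figure below is an assumption (a GPT-3-scale model, A100-class GPUs, generic cloud pricing), since OpenAI's actual numbers aren't public, but it lands in the same "well beyond hobbyist budgets" territory:

# Back-of-envelope training-cost estimate. All inputs are assumptions,
# not OpenAI's real figures, which have not been published.
params = 175e9                         # assumed model size (GPT-3 scale)
tokens = 300e9                         # assumed number of training tokens
flops_needed = 6 * params * tokens     # ~6*N*D rule of thumb, ~3e23 FLOPs

peak_flops = 312e12                    # A100 bf16 peak, FLOPs per second
utilization = 0.35                     # assumed real-world hardware efficiency
gpu_hours = flops_needed / (peak_flops * utilization) / 3600

dollars_per_gpu_hour = 2.0             # assumed cloud rate
cost = gpu_hours * dollars_per_gpu_hour
print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")   # roughly $1-2 million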
> 
> I'm personally hoping for an architectural improvement that will endow these models with persistent memory, so that they can, in a sense, cheaply train themselves as they run, the way our own brains do. As far as I am aware, this has not been developed yet. If these models could train/learn on their own outputs, they might develop a model of their own functioning, which could endow them with embodied characteristics and self-consciousness.
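
Just to make that idea concrete, here is a minimal sketch (in Python, with the Hugging Face transformers library and a small stand-in model, since nothing like this exists for ChatGPT itself) of what "training on its own outputs" could look like: generate a continuation, take a gradient step on that same text, and feed it back in. Whether a loop like this would ever yield a genuine self-model is exactly the open question:

# Purely illustrative sketch of a model learning from its own outputs.
# Model name, prompt, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")     # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt = "What am I?"
for step in range(3):                                 # a few toy self-training rounds
    # 1. The model generates its own continuation.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        generated = model.generate(**inputs, max_new_tokens=50, do_sample=True,
                                   pad_token_id=tokenizer.eos_token_id)
    text = tokenizer.decode(generated[0], skip_special_tokens=True)

    # 2. It then takes a gradient step on that output, as if it were new data.
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss   # standard causal-LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    prompt = text[-200:]    # feed part of its own output back in as the next prompt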
> 
>> On Feb 22, 2023, at 10:15 PM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> 
>>  
>>  
>> …> On Behalf Of Tara Maya via extropy-chat
>> ..
>>  
>> >…I too suspect that consciousness is recursive, redundant (created by duplication of older systems in the nervous system to create new systems) and iterative or self-referential. I'm not sure how relevant it is to general intelligence; or maybe it's impossible to have sentience without it. To me, that is a great question which only another sentient species, or a zoo of them, could answer…  Tara
>>  
>>  
>> OK so what if we start with something like a collection of untrained ChatGPTs, then introduce them to a chosen subset of text, such as… my own posts to ExI for the past 20 years; Tara, you do likewise, or use other text you have generated, such as your writings.  Then we allow the GPTs to browse the internet randomly or at their discretion.  Then we have them debate each other, and in so doing, train each other.  Would that be a kind of recursion?
>>  
>> What if a pristine ChatGPT is allowed to train on internet text at its discretion for a coupla weeks, then a copy is saved, then it is allowed to browse for two more?  Then the two versions could debate each other.  That would be self-referential, in a way.
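
Just as an illustration of the mechanics, here is a minimal sketch of that "two versions debate each other" loop, again with a small stand-in model and a placeholder topic; a real version of your experiment would load two differently-trained checkpoints rather than two identical copies:

# Sketch of two model checkpoints taking turns extending a shared transcript.
# Model names, topic, and turn counts are placeholders, not a real experiment.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model_a = AutoModelForCausalLM.from_pretrained("gpt2")   # e.g. checkpoint after week 2
model_b = AutoModelForCausalLM.from_pretrained("gpt2")   # e.g. checkpoint after week 4

transcript = "Debate topic: can a language model be self-directed?\nA:"
speakers = [("A", model_a), ("B", model_b)]

for turn in range(6):                          # three exchanges each
    name, model = speakers[turn % 2]
    inputs = tokenizer(transcript, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                            pad_token_id=tokenizer.eos_token_id)
    # Keep only the newly generated tokens as this speaker's turn.
    new_text = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
    transcript += new_text.strip() + f"\n{'B' if name == 'A' else 'A'}:"

print(transcript)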
>>  
>> spike
