[ExI] More thoughts on sentient computers

William Flynn Wallace foozler83 at gmail.com
Fri Feb 24 16:17:41 UTC 2023


We don't understand creativity, and thus cannot program it into our
computers.  Yet creativity is what gives humans the flexibility that
computers lack.  A computer has to go with probability; humans don't
(and aren't very good at it anyway).  So way-out solutions, the vast
majority of which fail or backfire, do happen, however improbably.  We
want instant answers from computers, while humans find solutions that
took decades or centuries to discover, and that were perhaps always
counterintuitive (aka crazy).
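
To make the "go with probability" point concrete, here's a minimal
Python sketch of probability-weighted sampling, which is roughly how
generative systems pick their outputs.  Everything in it (the candidate
answers, the scores, the temperature values) is invented for
illustration, not taken from any real system.

import math
import random

def sample_one(candidates, logits, temperature=1.0):
    """Pick one candidate, weighted by softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    # random.choices normalizes the weights itself
    return random.choices(candidates, weights=weights, k=1)[0]

candidates = ["obvious fix", "standard workaround", "way-out idea"]
logits = [4.0, 3.0, 0.5]  # the machine strongly prefers the probable

for temp in (0.2, 1.0, 2.0):
    picks = [sample_one(candidates, logits, temp) for _ in range(10000)]
    print(temp, picks.count("way-out idea") / 10000)

At temperature 0.2 the way-out idea essentially never surfaces; at 2.0
it shows up roughly a tenth of the time.  The machine can be nudged
toward improbable answers, but only by explicitly telling it to trust
its own probabilities less.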

bill w.



On Fri, Feb 24, 2023 at 10:07 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 23/02/2023 23:50, bill w wrote:
>
> > another question:  why do we, or they, or somebody, think that an AI has
> to be conscious to solve the problems we have?  Our unconscious mind solves
> most of our problems now, doesn't it?  I think it does.  bill w
>
>
> That's a good question.
>
> (If our unconscious solves most of our problems now, it's not doing a very
> good job, judging by the state of the world!)
>
> Short answer: We don't yet know if consciousness is necessary for solving
> certain problems. Or even any problems.
>
> Longer answer: I suspect it is necessary for some things, but have no
> proof, other than the circumstantial evidence of evolution.
>
> Consciousness evolved, and we know that evolution rapidly eliminates
> features that don't contribute to reproductive fitness, especially if they
> have a cost. Consciousness almost certainly has quite a big cost. This
> suggests that it's necessary for solving at least some of the problems that
> we've met over the last 300 000 years (or at least for *something* that's
> useful), or we wouldn't have developed it in the first place. Or if it
> happened by accident, and wasn't good for survival, we'd have lost it. So
> we can conclude at the very least that consciousness has been good for our
> survival, even if we don't know how.
>
> It strikes me as noteworthy that the kinds of things that our computers
> can do well, we do poorly (playing chess, mathematics, statistical
> reasoning, etc.), and some things that we have evolved to do well, our
> computers do poorly, or can't do at all (hunting and gathering, making
> canoes, avoiding hungry lions, making sharp sticks, etc.). Perhaps
> consciousness is the (or a) missing ingredient for being able to do those
> things. Yes, arms and legs are an obvious advantage, but many other animals
> with arms and legs never developed like we did.
>
> As the former tend to be abstract mental things, and the latter tend to
> be highly coordinated, complex physical things, maybe consciousness has
> a lot to do with embodiment, and with successfully manipulating the
> external world in complex ways. Maybe Big Dog is closer to consciousness
> than ChatGPT (or, more likely, needs it more).
>
> If Big Dog (or whatever the latest iteration of it is called) had ChatGPT
> in its head, as well as all the other stuff it already has, would it be
> able to build a canoe and use it to escape from a forest fire, decide where
> it was safe to stop, and build a hut? That would be an interesting
> experiment.
>
> Ben
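
Ben's point about selection pruning costly features is easy to see in a
toy model.  Here's a deliberately crude haploid simulation in Python
(the population size, starting frequency, and 5% fitness cost are all
invented for illustration, and it is of course no claim about how
consciousness actually evolved): a trait that costs fitness and buys
nothing is typically gone within a couple of hundred generations.

import random

def generations_to_lose(pop_size=1000, start_freq=0.5, cost=0.05,
                        max_gens=500):
    """Toy haploid selection: a costly trait with no benefit."""
    freq = start_freq
    for gen in range(1, max_gens + 1):
        # Expected post-selection frequency: carriers pay a fitness cost.
        fit = freq * (1.0 - cost)
        p = fit / (fit + (1.0 - freq))
        # Finite population: resample to add genetic drift.
        carriers = sum(random.random() < p for _ in range(pop_size))
        freq = carriers / pop_size
        if freq == 0.0:
            return gen
    return None  # trait survived the whole run (rare with these numbers)

print([generations_to_lose() for _ in range(10)])

Each number printed is the generation at which the trait vanished in
one run.  Flip the cost into a benefit and the trait sweeps to fixation
instead, which is the circumstantial case for consciousness paying its
way.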