[ExI] More thoughts on sentient computers
Ben Zaiboc
ben at zaiboc.net
Fri Feb 24 16:04:50 UTC 2023
On 23/02/2023 23:50, bill w wrote:
> another question: why do we, or they, or somebody, think that an AI
> has to be conscious to solve the problems we have? Our unconscious mind
> solves most of our problems now, doesn't it? I think it does. bill w
That's a good question.
(If our unconscious solves most of our problems now, it's not doing a
very good job, judging by the state of the world!)
Short answer: we don't yet know whether consciousness is necessary for
solving certain problems, or indeed for solving any problems at all.
Longer answer: I suspect it is necessary for some things, but have no
proof, other than the circumstantial evidence of evolution.
Consciousness evolved, and we know that evolution rapidly eliminates
features that don't contribute to reproductive fitness, especially if
they have a cost. Consciousness almost certainly has quite a big cost.
This suggests that it's necessary for solving at least some of the
problems that we've met over the last 300,000 years (or at least for
/something/ that's useful), or we wouldn't have developed it in the
first place. Or if it happened by accident, and wasn't good for
survival, we'd have lost it. So we can conclude at the very least that
consciousness has been good for our survival, even if we don't know how.
It strikes me as noteworthy that the kinds of things our computers do
well, we do poorly (playing chess, mathematics, statistical reasoning,
etc.), while the things we have evolved to do well, our computers do
poorly or can't do at all (hunting and gathering, making canoes,
avoiding hungry lions, making sharp sticks, etc.). Perhaps
consciousness is the (or a) missing ingredient for being able to do
those things. Yes, arms and legs are an obvious advantage, but many
other animals with arms and legs never developed like we did.
As the former tend to be abstract mental tasks, and the latter tend to
be highly co-ordinated, complex physical tasks, maybe consciousness has
a lot to do with embodiment, and with successfully manipulating the
external world in complex ways. Maybe BigDog is closer to consciousness
than ChatGPT (or, more likely, needs it more).
If BigDog (or whatever the latest iteration of it is called) had
ChatGPT in its head, as well as all the other stuff it already has,
would it be able to build a canoe and use it to escape from a forest
fire, decide where it was safe to stop, and build a hut? That would be
an interesting experiment.
Ben