[ExI] Hidden rules of Microsoft copilot.

Jason Resch jasonresch at gmail.com
Sat May 13 18:52:35 UTC 2023


On Sat, May 13, 2023, 10:25 AM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> So we are back to the old problem -
> Is the bot really 'human' or just pretending to be 'human'?  :)
>
> My question:  can it do only what we program it to do, or does it have
> emergent properties?
>

The way these modern AIs work, there's no programming involved. More
accurately, the only programming/instruction we give to build a language
model is to provide a large set of data and tell it: "learn from this."
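To make that concrete, here is a minimal sketch (in Python, and of course
nothing like Copilot's actual training) of the "hand it data and say learn
from this" idea: a toy character-level bigram model whose only "program" is
counting what the data contains. The corpus string is just an illustrative
placeholder.

    import random
    from collections import defaultdict

    # Hypothetical training data; everything the model "knows" comes from here.
    corpus = "the cat sat on the mat. the dog sat on the log."

    # "Training": count how often each character follows each other character.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(prev):
        # Sample the next character in proportion to what the data showed.
        chars, weights = zip(*counts[prev].items())
        return random.choices(chars, weights=weights)[0]

    # Generation: no rules were written, only statistics absorbed from the corpus.
    text = "t"
    for _ in range(40):
        text += sample_next(text[-1])
    print(text)

Real language models replace the counting with gradient descent over billions
of parameters, but the division of labor is the same: we supply data and a
learning procedure, not explicit behavioral rules.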



> This assumes it is possible to separate intelligence and consciousness.
>
> If nature could have done so, why did it go through all the bother of
> evolving and retaining consciousness (if we could have operated exactly the
> same without all the bother of having it)?
>
> I think all creatures have intelligence - they have adapted to the world
> they are in, and that's the ultimate test of intelligence.  If they can't
> be separated, then all creatures are conscious.  Can we live with that?
>

That's up to us. Can we live with other surprising ideas like the Earth
moving around the sun, or the idea of a quantum multiverse?

> Are AIs adapting?
>

Like most things, they're subject to Darwinian forces. Right now their
primary selection criterion is their utility to humans, but there's no
guarantee this will remain the criterion in the future.

Jason

