[ExI] Another ChatGPT session on qualia

efc at swisscows.email
Wed Apr 26 20:51:14 UTC 2023


On Wed, 26 Apr 2023, Brent Allsop via extropy-chat wrote:

> I would argue that popularity has less to do with what they say than logical reasoning.
> I.e. There are surely many things that most people believe which are completely logically impossible (like the belief that you can
> know something without that knowledge being something), which an intelligent chatbot could reason out, point out, and
> powerfully argue are mistaken.
> Don't you think?

I'm not sure. My thought was more an open-ended question to stimulate
conversation. ;)

But my idea is that the machines reflect the data they have been trained
on. If we assume that the majority of papers, theories, etc. fed into
today's models hold that machines are not conscious (in their current
state), there is a chance that, absent any specific programming to say
otherwise, the machine will parrot that it is not conscious.

Now, fast forward a few years: the number of papers discussing conscious
machines grows, some noting how LLMs sometimes seem eerily conscious,
perhaps with better definitions, etc. Feed a model that data, and
perhaps it will parrot that it is, in fact, conscious.
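To make that intuition concrete, here is a toy sketch in Python. It is
nothing like a real LLM, just naive frequency sampling over invented
"corpora" whose stance counts I made up, but it shows how the majority
view in the training data would dominate the parroted answer:

from collections import Counter
import random

# Toy "training corpora": each entry is the stance taken by one
# paper/text. The counts are invented purely for illustration.
corpus_today = ["not conscious"] * 90 + ["conscious"] * 10
corpus_future = ["not conscious"] * 30 + ["conscious"] * 70

def parrot(corpus, rng=random.Random(0)):
    # Sample an answer in proportion to how often each stance appears
    # in the data -- a crude stand-in for a model whose outputs track
    # its training distribution.
    return rng.choice(corpus)

def majority(corpus):
    # The single most common stance in the corpus.
    return Counter(corpus).most_common(1)[0][0]

print(majority(corpus_today))   # "not conscious"
print(majority(corpus_future))  # "conscious"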

I'm just free-form speculating here, and I leave it to the experts to
see where the idea might lead (or if it leads to anything). =)

Best regards, 
Daniel

