[ExI] Some new angle about AI

Stathis Papaioannou stathisp at gmail.com
Mon Jan 4 13:13:14 UTC 2010


2010/1/4 Stefano Vaj <stefano.vaj at gmail.com>:
> 2010/1/3 Stathis Papaioannou <stathisp at gmail.com>:
>> I think the argument from partial brain replacement that I have put
>> forward to Gordon shows that if you can reproduce the behaviour of the
>> brain, then you necessarily also reproduce the consciousness.
>> Simulating neurons and molecules is just a means to this end.
>
> "Consciousness" being hard to define as anything other than a social
> construct and a projection (and a pretty vague one, for that matter,
> inasmuch as it should be extensible to fruit flies...), the real point
> of the exercise is simply to emulate "organic-like" computational
> abilities with acceptable performance, brain-like architectures being
> demonstrably not too bad at the task.

I can't define or even describe the taste of salt, but I know what I
have to do in order to generate it, and I can tell you whether an
unknown substance tastes salty or not. That's what I want to know
about consciousness in general: I can't define or describe it, but I
know it when I have it, and I would like to know if I would still have
it after undergoing procedures such as brain replacement.

> I do not really see anything that suggests that we could not do
> everything in software with a PC, a Chinese Room or a cellular
> automaton, without emulating *absolutely anything* of the actual
> working of brains...

There's no more reason why an AI should emulate a brain than there is
why a submarine should emulate a fish. However, if you have had a
stroke and need the damaged part of your brain replaced, then it would
be important to simulate the workings of your brain as closely as
possible. It is not clear at present how fine-grained the simulation
would need to be.


-- 
Stathis Papaioannou
