[extropy-chat] Re: Structure of AI

Eliezer Yudkowsky sentience at pobox.com
Wed Nov 24 09:22:22 UTC 2004


Samantha Atkins wrote:
> 
> On Nov 23, 2004, at 2:51 AM, Eliezer Yudkowsky wrote:
> 
>>> Is a simulation 'detailed enough' the same as a simulation
>>> 'deterministic enough'? Because, in this case, the simulation
>>> would be too strong, that is to say no 'free will' (whatever it 
>>> means) would be allowed.
>>
>> This is where Adrian's rule comes in handy; until you can give me an 
>> experimental test for the presence or absence of free will, you're not 
>> allowed to talk about it.  :)
> 
> Now who's being silly?

I'm quite serious.  Adrian is right about the principle; his foolishness 
lies in lecturing a Bayesian Master on such a simple topic.  If you live 
your life by the precept of testability you shall not go astray.  If you 
cannot give a testable physical predicate for the presence or absence of 
free will, that does, indeed, indicate a deep and fundamental confusion; 
for if there is no physical predicate then whatever you are talking about 
must be orthogonal to physics, hence orthogonal to the universe, hence 
orthogonal to yourself and any thoughts you possess that have been sparked 
by observation of any real thing.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
