[ExI] AI Motivation revisited

Richard Loosemore rloosemore at susaro.com
Wed Jun 29 15:17:31 UTC 2011


Samantha Atkins wrote:
> On 06/28/2011 11:49 AM, Richard Loosemore wrote:
>> Eugen Leitl wrote:
>>> On Mon, Jun 27, 2011 at 05:28:09PM -0400, Richard Loosemore wrote:
>>>
>>>   
>>>> An ordinary PC, if programmed correctly, might be capable of something  
>>>> approaching full human intelligence in real time.  Probably not, but it  
>>>> is a possibility.
>>>>     
>>>
>>> Definitely not. 
>>>   
>> Well, all I can say that is "definite" is that the above statement is 
>> definitely garbage.  ;-)
>>
>> Until you understand what is involved in the mechanisms of 
>> intelligence, you are in no position to make such a definitive statement.
>>
>
> Are you sure?  Given the known average speed and other performance 
> characteristics of ordinary PCs (and their OS) and any particular 
> model of an AGI it should be quite possible to say pretty definitively 
> whether that model can be usefully realized on that hardware.  This is 
> an engineering task that does not require deep definitive knowledge of 
> what mechanisms are capable of producing intelligence.
Well, let me answer by throwing the question back to you:  in your 
comment you said ".... and any particular model of an AGI", which is 
equivalent to my saying "... until you understand what is involved in 
the mechanisms of intelligence".

Both of these statements imply that *given* a particular model of 
intelligence that is known to work, you can talk about whether that will 
be implementable on a PC, but that is not the same as declaring that no 
future implementation will run on a PC.

There are no known viable models of the structure of an AGI at the 
moment, so my main point is that definitive statements about the 
implementation needs of a future theory are, well, kinda premature.

In addition, my own calculations (given long ago in other posts on the 
topic) indicate that an architecture involving the class of cognitive 
system models that I work with (crudely speaking, relaxation systems in 
which there is a correspondence between cortical columns and relaxation 
cells) would be implementable on a machine that is within about one 
order of magnitude of the power of a PC.  (To be more precise:  the 
power of the graphics coprocessor in a PC, since that would have more 
appropriate parallelism and granularity).
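To make the "relaxation system" idea concrete, here is a minimal sketch of what one settling sweep over such a system might look like. Everything in it is an illustrative assumption, not Loosemore's actual model: the cell count, the symmetric coupling matrix, and the tanh update rule are placeholders standing in for whatever the real cortical-column-to-relaxation-cell correspondence would require.

```python
import numpy as np

# Hypothetical sketch: N relaxation cells (standing in for cortical
# columns) iteratively settle toward a mutually consistent state.
# All sizes and the update rule are illustrative assumptions.

rng = np.random.default_rng(0)
N = 1000                              # number of relaxation cells (assumed)
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W = (W + W.T) / 2                     # symmetric coupling between cells
np.fill_diagonal(W, 0.0)              # no self-coupling

state = rng.uniform(-1, 1, N)
for sweep in range(100):
    # each cell relaxes toward a consensus of its neighbours' states
    new_state = np.tanh(W @ state)
    if np.max(np.abs(new_state - state)) < 1e-6:
        break                         # system has settled
    state = new_state

# One sweep costs O(N^2) multiply-adds; this is the quantity a
# back-of-envelope hardware estimate would scale up to brain size.
ops_per_sweep = 2 * N * N
```

The dense matrix-vector product is exactly the kind of workload that maps well onto a graphics coprocessor, which is presumably why the parallelism and granularity of a GPU are singled out above.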

That is a very rough calculation, but it does mean that the declarations I hear, to the effect that we will need a minimum of one PC per neuron (or similar), or statements such as Eugen's above, are profoundly useless and misleading.
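For concreteness, a back-of-envelope estimate of the kind gestured at above might take the following form. Every number here is a hypothetical placeholder (the original posts containing the actual figures are not reproduced in this message); only the shape of the calculation is the point.

```python
# Hypothetical back-of-envelope estimate; all figures below are
# illustrative placeholders, not the author's actual numbers.

columns = 2 * 10**6        # rough count of cortical columns (assumed)
sweeps_per_sec = 100       # settling sweeps per second of real time (assumed)
ops_per_cell = 10**4       # arithmetic ops per cell per sweep (assumed)

required_flops = columns * sweeps_per_sec * ops_per_cell   # 2e12 ops/sec
gpu_flops = 10**12         # order of a 2011-era graphics coprocessor (assumed)

shortfall = required_flops / gpu_flops
print(f"shortfall factor: {shortfall:.0f}x")
```

Under these made-up inputs the gap comes out at a small single-digit factor, i.e. "within about one order of magnitude" rather than the one-PC-per-neuron regime; changing any assumed figure moves the conclusion accordingly.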



Richard Loosemore

