[extropy-chat] A quick AGI question

ben benboc at lineone.net
Fri Nov 10 20:23:10 UTC 2006


Yup, I reckon your donkey is in reverse gear all right.
Oops, sorry, I mean your fish is, erm, ackwards.
Sort of.
And then again, not.


This is the way I think of it:

Imagine you want an oscillator. That is, you want something that will
mark time with a regular beat.

You could do it by wiring up some capacitors, resistors and transistors.
Or you could do it by writing a few lines in your favourite programming
language.
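In software it might be no more than this (a minimal sketch in Python,
purely for illustration; the function name and the tick-printing are
just made up for the example):

import time

def oscillator(period=1.0):
    # Mark time with a regular beat: one tick per period, forever.
    while True:
        print("tick")
        time.sleep(period)

oscillator(0.5)   # two beats per second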

Does it make a difference?

The 'thoughts' of the AGI/Human/whatever are patterns of information
processing. How they are achieved is totally irrelevant. You could
/probably/ make a mind out of water clocks, if you were clever enough
(and had the necessary resources/time/patience). Or you could make it
out of a high-level programming language that was as far removed from
the transistors of the computer it was running on as Shakespeare is
from quarks.

So an AGI could be implemented in anything that was capable of
supporting the necessary complexity of information processing - silicon
transistors, rod-logic nanocomputers, optical processors, vats of
chemicals, maybe even clouds of plasma laced with magnetic fields.
You may think 'ah, but all of those involve arranging physical objects
in some way', and that's true. But there's nothing to stop you creating a
simulated world in a suitably powerful computer that models, say,
molecules, that interact to create computing elements, that act as the
information-processing substrate of a virtual brain that runs a mind.
You are then two or three levels of abstraction away from the 'physical
movements' that are probably always going to be necessary for any kind
of information processing. And you'd have a bloody hard time predicting
what thoughts the system was having from examining the state of the
transistors; it might actually be impossible.
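To make the abstraction-stacking concrete, here's a toy sketch (Python,
entirely invented for illustration): a 'substrate' of named on/off
switches stands in for the physics, gates are defined only in terms of
switch operations, and a half-adder is defined only in terms of gates.
Reading the raw switch states at the bottom tells you next to nothing
about the arithmetic happening at the top.

# Level 0: the simulated 'physics' - a bag of named on/off switches.
substrate = {}

def set_switch(name, value):
    substrate[name] = bool(value)

def read_switch(name):
    return substrate[name]

# Level 1: logic gates, defined only in terms of switch operations.
def nand(a, b, out):
    set_switch(out, not (read_switch(a) and read_switch(b)))

def xor(a, b, out):
    nand(a, b, "t1")
    nand(a, "t1", "t2")
    nand(b, "t1", "t3")
    nand("t2", "t3", out)

def and_gate(a, b, out):
    nand(a, b, "t4")
    nand("t4", "t4", out)

# Level 2: a half-adder, defined only in terms of gates.
def half_adder(a, b):
    xor(a, b, "sum")
    and_gate(a, b, "carry")
    return read_switch("sum"), read_switch("carry")

set_switch("a", True)
set_switch("b", True)
print(half_adder("a", "b"))   # (False, True): 1 + 1 = binary 10
print(substrate)              # the raw 'physics': opaque without the mapping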

So I think the effort to create AGI is going to concentrate on
understanding the patterns of information processing themselves, not the
physical stuff that will implement them.

Of course, at the end of the day, there has to be some kind of physical
process, but it's of minimal importance compared to the patterns of
processing being implemented.

I hope that makes some kind of sense.

ben zaiboc



A B <austriaaugust at yahoo.com> puzzled:

> Hey y'all,
> 
> I'm trying to develop a personal understanding of the (very) 
> elementary theory behind AGI, such that a mere mortal like me can 
> understand intuitively, without having to digest mountains of 
> literature. So I have a basic question, that I'll state using 
> informal (and I'm sure inaccurate) terminology, but I hope I can get 
> the idea across all the same.
> 
> Q) If I understand correctly, the algorithms responsible for human 
> thought are supplied by the physical arrangement of the "active" 
> hardware of the human brain. So, is the premise behind AGI that the 
> active *software* functions by pre-specifying the physical
> arrangement of the hardware (by specifying which transistors are
> active at what time for example) and that the AGI "thoughts" follow
> from this point onward? In other words, the actual "thoughts" of the
> AGI are always secondary to the hardware arrangement supplied by the
> software, and that in both cases it is *ultimately* the *hardware*
> that results in the mind?
> 
> Is this an accurate basic understanding, or is this all just
> bass-ackwards?
> 
> Best Wishes,
> 
> Jeffrey Herrlich


