[ExI] ai class at stanford
Mike Dougherty
msd001 at gmail.com
Mon Aug 29 22:53:32 UTC 2011
On Mon, Aug 29, 2011 at 1:01 PM, Adrian Tymes <atymes at gmail.com> wrote:
> instance to find where on the plane a flight will induce the most stress -
> and an entire 747, say, would seem to be at least as complex as a
> typical biological cell. If it is impossible to get this level of detail about
> neurons, what is it about neurons that makes this impossible?
>
> If it is not impossible to uncover said details, then it is not impossible to
> emulate them in software once they are uncovered. Once we can emulate
> one - including a complete model of its synapses - then we can emulate
> two, including any synapses they share if the original model correctly
I agree with your sentiment. How do you put one's high holy identity
onto the examination workbench? Ok, so I'm asking a leading question -
let me explain. I have no intention of devolving this conversation
into the usual muck and mire of qualia and identity and souls and
other such ephemera. However... in our collective Theory of Mind
there should be some explanation of the mechanism by which the sense
of self exists. Without at least some vector towards something like a
galactic center of gravity, we'll be forever examining this bit or
that and looking to see if it "contains" the magic that makes us all
agree, "I have intelligence, You have intelligence, He or She has
intelligence - but It: it is just a thing that does our bidding." (Is
intelligence isomorphic to sentience?)

I used the mashup of "galactic center" and "center of gravity"
because I feel the galactic center captures the essence of a galaxy's
worth of parts working together to make the thing we recognize and
label a galaxy, while the center of gravity is a simplification of a
host of forces well beyond undergraduate physics to model - yet
"center of gravity" is so intuitively simple that a 3-year-old can
explain why the hammer doesn't fall when the head rests on a table
with the handle sticking over the edge (a rough sketch of that
intuition follows below).
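To make the hammer intuition concrete, here is a minimal sketch in
Python. None of it comes from the thread: the masses, lengths, and
the two-point-mass model of the hammer are made-up illustrative
assumptions.

def center_of_mass(parts):
    # parts: list of (mass_kg, position_m) pairs along one axis;
    # returns the combined center of mass.
    total_mass = sum(m for m, _ in parts)
    return sum(m * x for m, x in parts) / total_mass

# Model the hammer as two point masses on an axis where x = 0 is the
# table edge: the heavy head rests on the table (negative x) and the
# light handle sticks out over the edge (positive x).
head = (1.0, -0.05)   # assumed: 1.0 kg head, centered 5 cm onto the table
handle = (0.3, 0.15)  # assumed: 0.3 kg handle, centered 15 cm past the edge

com = center_of_mass([head, handle])
print(f"center of mass: {com:+.3f} m from the edge")
print("tips over" if com > 0 else "stays put")  # here: -0.004 m, stays put

With these numbers the heavy head pulls the combined center of mass
back over the table, which is exactly the 3-year-old's explanation:
the whole thing balances because most of the weight sits on the table
side of the edge.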
So the central tendency of all the moving parts at various levels of
aggregation ([sub]atomic -> [micro|macro]scopic -> Astronomic ->
Cosmic) has some role to play in the relationship of individual
organisms (which are likely complex systems in themselves) up through
networks of organisms. Human-level intelligence is one goal;
Humanity-level intelligence is quite another. Networks of
humanity-level intelligences are beyond [our complete] comprehension.
I have no doubt that your approach of iteratively building and
tweaking the world's most complicated clockwork automaton will yield
a machine surprisingly adept at acting like a human (perhaps even
surpassing human ability) - but where and when do you call it a
person and grant it all the rights that people currently hold?
I'll pause here to see if anyone is interested. :)