[ExI] Mindless Thought Experiments

John K Clark jonkc at att.net
Fri Feb 29 17:37:22 UTC 2008


About a week ago I sent the following to Jaron Lanier, I did not receive a
reply:

=========

I read your article "Mindless Thought Experiments" at:
http://www.jaronlanier.com/aichapter.html

I have a few comments.

> let's assume that software is a legitimate medium for
> consciousness and move on.

No, let's not move on. Software is very important, but it is inert unless
there is hardware it can run on.

> start searching through all the possible computers that
> could exist up to a certain very large size until you find
> one that treats the raindrop positions as a program that
> is exactly equivalent to your brain.

OK.

>Yes, it can be done

I know.

> so is the rainstorm conscious?

Well, it certainly doesn't behave intelligently, and I will concede it is
conscious only when you show me how a stack of paper containing the
source code for a very good chess program can beat a grandmaster.

Nobody is saying pure information can do anything by itself; it must be
incorporated into matter. But the important point is that the matter you
use can be generic: any old matter will do.

> You say the rainstorm isn't really doing computation
> it's just sitting there as a passive program- so it
> doesn't count? Fine, then we'll measure a larger
> rainstorm and search for a new computer that treats
> a larger collection of raindrops as implementing BOTH
> the computer we found before that runs your brain as
> raindrops AS WELL AS your brain in raindrops.

So now, in addition to a pile of paper containing the source code of a
very good chess program, there is another pile of paper containing the
blueprints of a very good computer; but I still don't think those two
piles of paper together can beat you at a game of chess. Printing out
more paper won't help. Sooner or later somebody is going to have to
build something.

> The thought experiment supply store can ship us an
> even better sensor that can measure the motions,
> not merely the instant positions, of all the raindrops
> in a storm over a period of time.

Sensor? How can pure information sense anything? In fact, I can't see
how information could be stored or processed at all without matter and
energy being involved somewhere down the line.

Your rainstorm contains only a trivial fraction of the information needed
for an intelligence; you also need to specify exactly what sort of computer
hardware this rain program will run on. And the distinction between
hardware and software can become very fuzzy indeed if you consider
hardware to be atoms organized in a way specified by information.

If you don't agree with this then you'd have to take seriously my claim to
have a revolutionary data-compression algorithm that can compress the
entire Vista operating system into one bit; you just have to use the right
computer. If the bit is a 0 the computer produces gibberish, but if the
bit is a 1 the computer runs the copy of Vista that is hardwired into it.
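To make the joke concrete, here is a minimal sketch (my own illustration,
not from the article or this email) of such a "decompressor": the one-bit
archive carries essentially no information, because all of it has been
smuggled into the decompressor itself.

```python
# A toy stand-in for the operating system; any byte string would do.
PAYLOAD = b"pretend this is the entire Vista operating system"

def decompress(bit: int) -> bytes:
    """Expand a one-bit 'archive'. All the real information lives in
    PAYLOAD, hardwired into this function, not in the bit itself."""
    return PAYLOAD if bit == 1 else b"gibberish"

# The 'compressed file' is a single bit...
archive = 1
restored = decompress(archive)
assert restored == PAYLOAD

# ...but the total description (the bit plus the decompressor that
# contains PAYLOAD) is no smaller than the payload it reproduces.
assert len(PAYLOAD) + 1 > len(PAYLOAD)
```

The point of the sketch is that specifying the "right computer" is doing
all the work: moving information from the data into the machine that
interprets it does not make the information go away.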

> You might object that the raindrops are not
> influencing each other, so they are still passive,
> as far as computing your brain is concerned.
> Let's switch instead, then, to a large swarm of
> asteroids hurtling through space. They all exert
> gravitational pull on each other. Now we'll use a
> sensor for asteroid swarm internal motion and
> use it to get data that will be matched to an
> appropriate computer to implement your brain.
> Now you have a physical system whose internal
> interactions perform the computation of your mind.

A "mind" that fails the Turing Test, and fails it about as spectacularly
as can be imagined.

> A few DO take the bait and choose to believe
> there are a myriad of consciousnesses everywhere.
> This has got to be the least elegant position ever
> taken on any subject in the history of science.

I would say it is the second least elegant position in the history of
science, because all your arguments, to the extent they have any force at
all, could just as easily be used to "prove" that consciousness does not
exist anywhere in the universe.

Well, OK, I suppose you could still say that Jaron Lanier is conscious,
because you know that from direct experience, and direct experience
outranks the scientific method or even logic; but you'd have absolutely
no reason to think any of your fellow human beings are conscious.

> AI proponents usually seize on some specific stage
> in my reductio ad absurdum to locate the point
> where I've gone too far.

I believe that if you're going to attempt a reductio ad absurdum proof,
care must be taken to ensure that your conclusion is contradictory and not
just odd. Einstein also came up with a thought experiment; he thought it
proved that Quantum Mechanics must be wrong, because otherwise things
would be odd; not illogical, not contradictory, just odd. In the last few
years experiments based on that thought experiment (tests of Bell's
Inequality) were actually performed, and now we know that things are
indeed odd.

> to the right alien, it might appear that people do
> nothing, and asteroid swarms are acting consciously.

That would be very odd indeed, but as we have no way of communicating
with such an alien, or it with us, there is little more that can be said
about the matter.

Somewhere in the universe there may be a language in which this email,
without changing a single character, expresses in perfect grammar the
instructions for operating a new type of can opener; but as neither
of us knows that language, there is little danger of this conversation
being diverted into a discussion of can opener technology.

> In the following discussion, I'll let the terms "smart"
> and "conscious" blur together, even though I
> profoundly disagree that they are interchangeable.

I've been asking people with ideas like yours this question for decades,
but I've never received a straight answer, not one:

If intelligence is not always linked to consciousness then why on Earth did
evolution produce it?

After all, however important our inner life is to us, it is invisible to
natural selection; indeed, the reason you think the Turing Test does not
work is precisely that you believe behavior is not linked to
consciousness. And yet we know with absolute, positive, 100% certainty
that evolution did produce at least one conscious being.
But why? How?

However, if consciousness is simply what information feels like when it is
being processed, then all of this would be explained. If Turing was wrong
then so was Darwin. I do not think Darwin was wrong. I really don't.

It is also interesting that the parts of the brain that produce emotion (and
presumably you think emotion has something to do with consciousness)
are hundreds of millions of years old, but the parts of the brain that
produce the sort of intelligence we are so proud of are only about one
million years old.

So if evolution found it easier to come up with consciousness than with
intelligence, I don't see why we would find the opposite to be true; you
could make a stronger case that a computer may someday be conscious
but will never be intelligent; although I think it will be both.

> AI has been one of the most funded, and least
> bountiful, areas of scientific inquiry in the
> second half of the twentieth century. It keeps
> on failing and bouncing back with a different name

Speaking of changing names, the reason AI seems to make so little
progress is that as soon as a computer can do something, it is decided
that the thing in question does not really require intelligence after all.
Fifty years ago everybody, and I do mean everybody, thought that solving
equations or playing a great game of chess required intelligence.
No more. Fifteen years ago everybody thought it would take a great deal
of intelligence for a librarian to do what Google does. No more.
Intelligence is whatever a computer can't do, YET.

> the moral "equal rights" argument for the
> machine's benefit.

It is of academic interest only whether we should treat an intelligent
machine morally; it is much more than of academic interest whether the
machine will treat us morally. Like it or not, the future belongs to the
machines, not to us.

> This is where AI crosses a boundary and
> turns into a religion.

Only if you think religion should have a monopoly on the deepest
questions. I don't.

> Hans Moravec is one researcher who
> explicitly hopes for this eventuality.

Me too.

> If we can become machines we don't have to die,
> but only if we believe in machine consciousness.

And only if we are astronomically lucky. Still, given the choice between
slim chance and no chance I'll pick slim chance any day.

> Alan Turing proposed a test in which a computer
> and a person are placed in isolation booths and
> are only allowed to communicate via media that
> conceal their identities, such as typed emails.

And that is exactly what we are doing right now. I disagree with much in
your article, but I still think it was written by an intelligent, conscious
being. Do you think the same about me? Do you think that I, the writer of
this email, am conscious? Though I can't imagine why you would think such
a ridiculous thing, since you don't believe the Turing Test is any good.

 John K Clark     jonkc at att.net
