[ExI] Personal conclusions
Spencer Campbell
lacertilian at gmail.com
Thu Feb 4 18:12:27 UTC 2010
Stefano Vaj <stefano.vaj at gmail.com>:
> Not really to hear him reiterate innumerable times that for whatever
> reason he thinks that (organic? human?) brains, while obviously
> sharing universal computation abilities with cellular automata and
> PCs, would on the other hand somewhat escape the Principle of
> Computational Equivalence.
Yeah... yeah.
He doesn't seem like the type to take Stephen Wolfram seriously.
I'm working on it. Fruitlessly, maybe, but I'm working on it. Getting
some practice in rhetoric, at least.
Stefano Vaj <stefano.vaj at gmail.com>:
> ... very poorly defined Aristotelic essences would per se exist
> corresponding to the symbols "mind", "consciousness", "intelligence" ...
Actually, I gave a fairly rigorous definition for intelligence in an
earlier message. I've refined it since then:
The intelligence of a given system is inversely proportional to the
average action (time * work) that must be expended before the system
achieves a given purpose, assuming it began in a state as far from
that purpose as possible.
(As I said before, this definition won't work unless you assume an
arbitrary purpose for the system in question. Purposes are roughly
equivalent to attractors here, but the system may itself be part of a
larger system, like us. Humans are tricky: the easiest solution is to
say they swap purposes many times a day, which means their measured
intelligence would change depending on what they're currently doing.
Which is consistent with observed reality.)
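To make the definition concrete, here is a rough sketch in Python of how
one might measure it. This is purely my own illustration: the toy
one-dimensional agent and the helper names (action_to_reach,
intelligence) are assumptions, and only the formula itself
(intelligence proportional to 1 / average action, with action =
time * work) comes from the definition above.

def action_to_reach(start, target, step_size=1.0, work_per_step=1.0):
    """Time * work spent by a toy agent walking from start to target."""
    position, time_steps, work = float(start), 0, 0.0
    while abs(position - target) > step_size / 2:
        position += step_size if position < target else -step_size
        time_steps += 1
        work += work_per_step
    return time_steps * work  # "action" in the sense used above

def intelligence(target, worst_case_starts):
    """Inverse of the average action from maximally distant start states."""
    actions = [action_to_reach(s, target) for s in worst_case_starts]
    average = sum(actions) / len(actions)
    return float('inf') if average == 0 else 1.0 / average

# A system starting at the far edges of a bounded state space [-10, 10],
# pursuing the purpose "reach x = 0":
print(intelligence(target=0.0, worst_case_starts=[-10.0, 10.0]))  # 0.01

A smarter system, under this definition, would simply be one that
reaches the same purpose with less combined time and work, for example
by taking larger or better-aimed steps.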
I can't give similarly precise definitions for "mind" or
"consciousness", and I wouldn't be able to describe the latter at all.
Tentatively, I think consciousness is devoid of measurable qualities.
This would make it impossible to prove its existence, which to my mind
is a pretty solid argument for its nonexistence. Nevertheless, we talk
about it all the time, throughout history and in every culture. So
even if it doesn't exist, it seems reasonable to assume that it is at
least meaningful to think about.
Stefano Vaj <stefano.vaj at gmail.com>:
> Now, if this is the case, I sincerely have trouble finding a
> reason why we should not accept, on an equal basis, the article of
> faith that Gordon Swobe proposes as to the impossibility for a
> computer to exhibit the same.
Your argument runs like this:
We have assumed at least one truth a priori. Therefore, we should
assume all truths a priori.
No, sorry. It doesn't work that way. All logic is, at base,
illogical. You begin by assuming something for no logical reason
whatsoever and attempt to redeem yourself from there. That doesn't
mean reasoning is futile. There's a big difference between a logical
assumption (which doesn't exist) and a rational assumption (which
does).
Accepting at face value that we have minds, intelligence, and
consciousness is perfectly rational. Accepting at face value that
computers cannot is not.
I can't say exactly why you should believe either of these statements,
of course. They aren't in the least bit logical. Make of them what you
will. I have to go eat breakfast.