[ExI] Digital Consciousness

Brent Allsop brent.allsop at canonizer.com
Wed Apr 24 16:11:59 UTC 2013


Hey Gordon,

It’s great to hear from you again!  And thanks for starting up this great
discussion on the most important of all topics (surely where the next
greatest scientific discovery could come from).  And it’s great to see all
the other old guys back!  Stathis, Anders, John, Eugen…

But, Gordon, as always, I’m really struggling to know what, exactly, you
believe.  People like Stathis have ‘Canonized’ their views, so I know
concisely what they think, who else thinks like them, and how many other
experts agree with them.  When someone like Stathis says he’s in the
Functional Property Dualism camp, I know exactly what he believes and who
else agrees with him, and communication can finally take place – no hours
of fuss and muss, as is already going on in this thread, with everyone
failing to communicate and mostly talking past each other in endlessly
repetitive yes / no / yes / no ways.  When I listen to what you say, I’m
not even sure whether you agree with the general, near-unanimous expert
consensus “Representational Qualia Theory” stuff (
http://canonizer.com/topic.asp/88/6 ).  Sometimes I think we agree on many
things, but I can never know for sure, because most of what you say is so
nebulous.

I love what Eugen said about what the Stanford philosophy page says about
the word “Intentionality”, how it makes “his skin crawl”, and all the
spot-on reasons he gave for why it is so bad.  None of that is falsifiable
or testable, and all of the behavior described there can be duplicated by
abstract machines, as long as the representations are interpreted
correctly.  Using terms like that just makes communication with hard
scientists impossible.

The stuff at Canonizer.com has progressed significantly since the last
time we talked.  In addition to now having Canonized camps from Lehar,
Chalmers, Hameroff, and Smythies, even Daniel Dennett has helped canonize
his latest theory, which he calls “Predictive Bayesian Coding Theory”
(notice that even he has abandoned, as falsified, his previous “multiple
drafts” theory):
http://canonizer.com/topic.asp/88/21

So my question to you, Gordon, is: what, exactly, do you think about
consciousness?  Will it be possible to build an artificial machine, in any
way, that has what you think is important, even if it is not an abstracted
digital computer?  And can you state what you believe concisely, so that
others can know what you are saying, enough that others who think
similarly can say they agree with you?  Are there any camps at
Canonizer.com close to what you believe?  If not, which camp is the
closest?  Have you found anyone, even a philosopher, who believes the same
as you do?

I also like the way Eugen ridicules “philosophers”.  Philosophers are
always talking non-falsifiable stuff, and they never have any way (i.e.,
hard science) of convincing everyone else to think the same way they think
we should.  Canonizer.com is not philosophy; it is theoretical science.
It is all about what kind of hard evidence would falsify your particular
theory and convert you to mine, and all about rigorously measuring when
some scientific data, or good rational arguments, come along strong enough
to falsify camps for former supporters of a theory.


Oh and Stathis said:

<<<<
There is an argument from David Chalmers which proves that computers
can be conscious assuming only that (a) consciousness is due to the
brain and (b) the observable behaviour of the brain is computable.
Consciousness need not be defined for the purpose of the argument
other than vaguely: you know it if you have it. This makes the
argument robust, not dependent on any particular philosophy of mind or
other assumptions. The argument assumes that consciousness is NOT
reproducible by a computer and shows that this leads to absurdity. As
far as I am aware no-one has successfully challenged the validity of
the argument.

http://consc.net/papers/qualia.html
>>>

Stathis, I think we’ve made much progress on this issue since you were
last involved in the conversation.  I’m looking forward to seeing whether
we can now convince you that we have, and that there is a real possibility
of a solution to the Chalmers conundrum you believe still exists.

I, for one, am in the camp that believes there is an obvious problem with
Chalmers’ “fading / dancing qualia” argument, and once you understand
this, the ‘hard problem’ goes away – objectively, demonstrably, sharably
(as in effing of the ineffable) – no fuss, no muss, all falsifiably or
scientifically demonstrably (to the convincing of all experts) so.  Having
an experience like “oh, THAT is what your redness is like – I’ve never
experienced anything like that before in my life, and I was that kind of
‘redness zombie’ before now” will certainly falsify most bad theories out
there today, especially all the ones that predict that kind of sharing or
effing of the ineffable will never be possible.

Brent Allsop



On Wed, Apr 24, 2013 at 8:50 AM, Eugen Leitl <eugen at leitl.org> wrote:

> On Wed, Apr 24, 2013 at 03:57:09PM +0200, Anders Sandberg wrote:
>
> > Eugene, part of this is merely terminology. Power in philosophy is
>
> Yes, I realize that some of it is jargon. However (and not for
> lack of trying) I have yet to identify a single worthwhile
> concept coming out of that field, particularly in the theory of mind.
>
> You used to be a computational neuroscientist before you
> became a philosopher (turncoat! boo! hiss!). What is your professional
> opinion about the philosophy of mind subdiscipline?
>
> > something different than in physics, just as it means something very
> > different in sociology or political science.
> >
> > Then again, I am unsure if intentionality actually denotes anything, or
> > whether it denotes a single something. It is not uncontroversial even
> > within the philosophy of mind community.
> >
> >
> >> "The word itself, which is of medieval Scholastic origin,"
> >> ah, so they admit it's useless.
> >
> > Ah, just like formal logic. Or the empirical method.
>
> Ah, but philosophy begat natural philosophy, aka the sciences.
> Unfortunately, the field itself never progressed much beyond
> its origins. The more the pity when a stagnant field is
> chronically prone to arrogant pronouncements about disciplines
> they don't feel they need to have any domain knowledge in.
>
> >> See, something is fishy with your concept of consciousness. If we look
> >> at it as the ability to process information, suddenly we're starting to
> >> get somewhere.
> >
> > Maybe. Defining information and processing is nearly as tricky. Shannon
> > and Kolmogorov don't get you all the way, since it is somewhat
> > problematic even to define what the signals are.
> >
> > Measurability is not everything. There are plenty of risks that do not
> > have well defined probabilities, yet we need and can make decisions
> > about them with above chance success. The problem with consciousness,
> > intentionality and the other theory of mind "things" is that they are
> > subjective and private - you cannot compare them between minds.
>
> I really like that the Si elegans has identified the necessity of
> a behavior library.
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>