[ExI] consciousness vs intuition and insight
Brent Allsop
brent.allsop at comcast.net
Sun Jan 18 17:27:43 UTC 2009
Sam,
Welcome to the 'nuts' crowd. I'd bet everyone on this list is considered
'nuts' by lots of (but not all) people. And of course, we all know we're
not 'nuts', right?
I completely agree with you that we'll understand consciousness before
we understand higher cognitive faculties like emotion, intuition, and
insight. But from some of the other things you say, I suspect we have
different ideas about what consciousness is.
On the idea of consciousness, I'm in these camps:
http://canonizer.com/topic.asp/88/6
http://canonizer.com/topic.asp/88/7
Could you concisely state just what you believe about consciousness and
what you believe can be done?
Upward,
Brent Allsop
sam micheal wrote:
>
> ACOMA – A COnscious MAchine
>
> Can it be done?
>
> Can it be designed by me?
>
> Sam Micheal
>
> It’s ‘official’; I’m ‘nuts’. I have been officially told by a
> university professor of computer science: “This problem is too big
> for you, Sam.” Really? Is that so? Are you 1000% sure?
>
> As a person ‘in love’ (understatement) with systems science, physics,
> and AI, I have taken so many courses from engineering disciplines that
> I have lost count of where and when. I DO remember a computer vision
> course I took. I DO remember some basic precepts. I DO remember how
> little we knew then about scene recognition (that was about 15 years
> ago, so perhaps we know a little more now). But if you actually READ
> my proposal, it says NOTHING about any dependency on scene
> recognition. In fact, it depends not one IOTA on anything ‘in
> development’.
>
> This is the ‘beauty’ of our current system. “Instead of pursuing this
> avenue of investigation, /which I doubt you have any real experience
> in/...” [italics added] he went on to suggest that I restrict myself
> to more ‘tame’ and approachable areas of computer science. I thanked
> him for his traditional concern. But his ‘concern’ was itself
> dismissive. His department is focused on computer science education.
> Why should they care about conscious machines? “They would have done
> it by now if they could.” (He voiced almost the same sentiment in the
> same letter.) Wow; what a ‘revelation’. And he said all this without
> actually reading my proposal.
>
> Perspective; perspective; perspective. Read Modern Heuristics by
> Michalewicz. If you can understand that, you’re smart. If you can
> apply it, you’re smarter. Now, I’m not saying I’m /that/ smart. ;) But
> I am saying I have some insights about the problem. Key word:
> insights. What’s another key word? Intuition. Now, let me review a
> recent conversation with my mother about consciousness...
>
> “The reason AI people have not developed conscious machines is that
> they have focused on intelligence, NOT consciousness. And they have
> made the critical conceptual error of thinking that consciousness is
> dependent on immature technologies like computer vision. It is NOT. I
> contend consciousness is /physical/; we can understand it physically.
> However, much more elusive are concepts like intuition and
> inspiration. I contend we will develop conscious machines /way before/
> we will develop machines with intuition and inspiration.”
>
> My design is more than just ‘physical’; it is information-dependent.
> There is a component in my design called a rule-base. Is this the
> same thing as a database? Is it constructed with data mining? Maybe.
> Maybe not. I try to define some general specifications. I believe I
> have a construct that is ‘rich’ enough (diverse and sophisticated
> enough) to at least mimic consciousness. And I try to provide much
> more than consciousness. I design structures that will assist
> intelligence and self-awareness. Hopefully, these will enhance
> consciousness. The idea is this: I think it is difficult, but not
> impossible, to create consciousness from scratch. If we can create a
> device that is minimally aware and also give it some capabilities:
> intelligence, self-awareness (via a model), and some capacity for
> visualization (which to me is very important), we may achieve what
> most have said is impossible: machine consciousness. My construct is
> perhaps too dependent on visualization. My original specification
> exceeded current technology (one megabit cubed). Because that is
> impossible by current standards, I had to cut it down by a factor of
> one million. Can the thing still be self-aware with limited
> visualization capability? I don’t know. But it’s worth trying.
>
> It’s certainly worth more than “This problem is too big for you,
> Sam.”
>
> Sam Micheal, 17/JAN/2009
>
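
P.S. Sam doesn't say what his 'rule-base' actually is, so the following
is purely an illustration of one common reading (and emphatically not
his ACOMA design): a rule-base as a set of condition-action rules fired
against a working memory, rather than a passive database of records.
Every name below is made up for the example.

# Illustrative only: a toy condition-action rule-base, NOT Sam's design.
# A "rule" pairs a condition over the current state with an action that
# updates that state; the engine fires whichever rules currently match.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, object]  # working memory: named facts about world/self

@dataclass
class Rule:
    name: str
    condition: Callable[[State], bool]   # when does this rule apply?
    action: Callable[[State], None]      # how does it change the state?

@dataclass
class RuleBase:
    rules: List[Rule] = field(default_factory=list)

    def step(self, state: State) -> List[str]:
        """Fire every rule whose condition holds; return the names fired."""
        fired = []
        for rule in self.rules:
            if rule.condition(state):
                rule.action(state)
                fired.append(rule.name)
        return fired

# Tiny usage example: a single hypothetical "self-monitoring" rule.
rb = RuleBase([
    Rule(
        name="notice_low_power",
        condition=lambda s: s.get("battery", 1.0) < 0.2,
        action=lambda s: s.update(goal="recharge"),
    )
])
state: State = {"battery": 0.1}
print(rb.step(state), state)
# ['notice_low_power'] {'battery': 0.1, 'goal': 'recharge'}

Whether such rules are hand-written or mined from data is exactly the
question Sam leaves open; the sketch only shows why a rule-base is not
the same thing as a database.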