[ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea.
A B
austriaaugust at yahoo.com
Wed Jun 13 17:48:30 UTC 2007
Well, I tried to stay away, but I can't force myself
to let these persistent absurdities go unanswered.
I've managed to calm myself down for the moment, so I
am responding here with as much impartiality as I can
muster under the circumstances.
John K Clark wrote:
> "Then it is not a AI, it is just a lump of silicon."
Wrong.
> "In other words, how do you make an intelligence
> that
> can't think, because
> thinking is what consciousness is. The answer is
> easy, you can't."
Wrong.
"Jeff Hawkins is starting a company to build machines
> using this principle
> precisely because he thinks that is the way the
> human brain works. If it
> didn't turn us into mindless zombies why would it do
> it to an AI?"
What does this have to do with the debate? I don't see
how this is at all relevant.
> "In other words give this intelligence a
lobotomy;"...
Yet another absurd accusation. The "intelligence"
doesn't yet exist, and it won't require a squishy
frontal lobe in order to function. Has my desktop had
an immoral lobotomy? Should I boycott Dell for having
made it? After all, it doesn't have general
intelligence or the capacity to self-modify. If you
are honestly so concerned about the "feelings" of all
computers, John, then shouldn't you stop sending posts
to this list? After all, you are using your "conscious"
computer as a slave.
By obvious implication, a Friendly AI will not proceed
to use all physical resources in the local area. After
a point it will cease to expand its own hardware and
will allow humanity to catch up to it, at least to
some degree. At that point, whatever necessary
restrictions were placed on the AI (such as the absence
of emotions, etc.) will be removed as quickly as safety
will allow, or there will be some other similar
evolution of events. The point is that the Friendly AI
will not suffer and it will not be denied a great
life; all that is asked of it is that its creators
(humanity) are also allowed a great life. Seems like a
fair trade to me.
No one here is saying that Friendly AI will be easy
to make; all I'm saying is that it isn't *physically
impossible*, and that we should make some effort to
build a Friendly AI, because making no such effort
would seem to be unwise, IMO.
..."so
> much for the righteous
> indignation from some when I call it for what it is,
> Slave AI not Friendly
> AI."
It is *you* who are dishonestly posing as righteous.
You are very frequently rude and obnoxious to people.
It's interesting (but not very mysterious) that you
are pretending to be so deeply concerned about the
feelings of the AI, when the feelings of other humans
frequently appear to be of no concern to you. In fact,
your way of posturing on behalf of the AI is to throw
other people to the wolves.
> But it doesn't matter because it won't work
> anyway, if those parts were
> not needed for a working brain Evolution would not
> have kept them around for
> half a billion years or so.
You don't understand the *basic* concepts of
evolution, intelligence, consciousness, motivation or
emotion. I'm not saying that I understand everything
about these (I most definitely do not, at all) but I
understand them more accurately than you. No offense.
> "Then you can kiss the Singularity goodbye, assuming
> everybody will be as
> squeamish as you are about it; but they won't be."
Actually, you could use a quasi-human-level,
non-self-improving AI as an interim assistant in order
to gain a better understanding of the issues
surrounding the Singularity. That's not a bad
strategy; in fact it's similar to the strategy that
SIAI will be using with Novamente, to the best of my
knowledge.
I've asked you to stop with your "Slave AI"
accusations and you've refused. If you want to
continue to be rude and accusatory, that's your right.
In turn, you should not expect any undeserved
respect from me. I will continue to support SIAI to
the extent I'm able, and I will let the future
super-intelligence judge whether or not I was being
evil in that pursuit. At this point, your ridiculous
assertions about my motives mean very little to me.
Jeffrey Herrlich
--- John K Clark <jonkc at att.net> wrote:
> "Rafal Smigrodzki" <rafal.smigrodzki at gmail.com>
> Wrote:
>
> > Stathis is on the right track asking for the AI to
> be devoid of desires to
> > act
>
> Then it is not a AI, it is just a lump of silicon.
>
> > how do you make an intelligence that is not an
> agent
>
> In other words, how do you make an intelligence that
> can't think, because
> thinking is what consciousness is. The answer is
> easy, you can't.
>
> > I think that a massive hierarchical temporal
> memory is a possible
> > solution.
>
> Jeff Hawkins is starting a company to build machines
> using this principle
> precisely because he thinks that is the way the
> human brain works. If it
> didn't turn us into mindless zombies why would it do
> it to an AI?
>
> > A HTM is like a cortex without the basal ganglia
> and without the
> > motor cortices, a pure thinking machine, similar
> to a patient made
> > athymhormic by a frontal lobe lesion damaging the
> connections
> > to the basal ganglia.
>
> In other words give this intelligence a lobotomy; so
> much for the righteous
> indignation from some when I call it for what it is,
> Slave AI not Friendly
> AI. But it doesn't matter because it won't work
> anyway, if those parts were
> not needed for a working brain Evolution would not
> have kept them around for
> half a billion years or so.
>
> >Avoidance of recursive self-modification may be
> another technique to
> >contain the AI.
>
> Then you can kiss the Singularity goodbye, assuming
> everybody will be as
> squeamish as you are about it; but they won't be.
>
> > I do not believe that it is possible to implement
> a
> goal system perfectly stable during recursive
> modification
>
> At last, something I can agree with.
>
> John K Clark