[ExI] fun outsider's view on ai
William Flynn Wallace
foozler83 at gmail.com
Mon May 9 20:46:35 UTC 2016
I'm neither a Singularitarian nor an AItheist. I think human-level AI is
inevitable, if President Trump doesn't manage to wipe out the human race
first :-). But I don't buy the notion that super intelligence is akin to a
superpower, and don't think it's necessary for an AI to have consciousness,
human-like emotions, or the ability to set its own goals, and without those
there is no need to fear them.

-Dave
If you want an AI to be superintelligent, why reference the neuron, Spike?
Human brains are so fallible it's just silly. A person super intelligent
about one thing can be totally at a loss about many other things. I think
brains must still be evolving, because as they are, they are cobbled
together from available equipment and have functioned just well enough to
get us to the present. You don't have to be a psychologist to see the
irrationality, the emotional involvement, and the selfishness in the output
of human brains. There are many functions of the brain that we could do
entirely without. Start with all the cognitive errors we already know
about.
OK, so what else can we do? Every decision we make is wrapped up in
emotions. That alone does not make our decisions wrong or irrational, but
often they are. Take the emotions out and see what we get. Of course, they
are already absent from the AIs we have now. So here is the question: do we
really want an AI to function like a human brain? I say no. We are looking
for something better, right?
Since by definition we are not yet posthumans, how would we even know that
an AI decision was superintelligent? I don't know enough about computer
simulations to criticize them, but sooner or later we have to put an AI's
decisions to experimental tests in the real world, not knowing what will
happen.
In any case, I don't think that there is any magic in the neuron. It's in
the connections. And let's not forget the role of glial cells, of which we
are just barely aware (see The Other Brain by R. Douglas Fields).
Oh yeah, and the role of the gut microbiome, of whose functions we are also
just barely aware. Not even to mention all the endocrine glands and their
impact on brain function: raising and lowering hormone levels has profound
effects on the functioning of the brain. Ditto food, sunspots (?), humidity
and temperature, chemicals in the dust we breathe, pheromones, and drugs (I
take over 20 pills of various sorts; who or what could figure out the
combined results of that?). All told, an incredible number of variables,
some of which we may not even know about at present, all interacting with
one another, with our learning, and with our genes.
In short, we are many decades, maybe a century, away from a good grasp of
the brain. A super smart AI will likely not function at all like a human
brain. No reason it should. (Boy, am I going to get flak on this one.)
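(On spike's point below about simulating a neuron: even a toy model shows the idea. Here's a minimal leaky integrate-and-fire sketch in Python. The function name and parameter values are my own illustrative choices, not measurements of any real neuron, and real neurons are of course far messier than this, which is rather my whole point.)

```python
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Toy leaky integrate-and-fire neuron.

    Integrates membrane voltage over time steps of length dt; emits a
    spike whenever the threshold is crossed, then resets the voltage.
    input_current is a list of input currents (amps), one per step.
    Returns the list of spike times in seconds. All parameter values
    are illustrative guesses, not biological measurements.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest while being
        # driven up by the input current through membrane resistance.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA drive for one simulated second yields a regular
# spike train; zero drive yields no spikes at all.
spike_times = simulate_lif([2e-9] * 1000)
```

A single equation like this is obviously nothing like a full biophysical model, but it makes spike's argument concrete: once you can characterize the input/output behavior, you can simulate it, and then the question is only one of scale and of how much detail actually matters.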
bill w
On Mon, May 9, 2016 at 11:27 AM, Dave Sill <sparge at gmail.com> wrote:
> On Mon, May 9, 2016 at 10:44 AM, spike <spike66 at att.net> wrote:
>
>>
>> Nothing particularly profound or insightful in this AI article, but it is
>> good clean fun:
>>
>> https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible?utm_source=Aeon+Newsletter&utm_campaign=6469cf0d50-Daily_Newsletter_9_May_20165_9_2016&utm_medium=email&utm_term=0_411a82e59d-6469cf0d50-68957125
>>
>
> Yeah, not bad. Mostly on the mark, IMO, but he says a few things that are
> just not rational.
>
> He reminds me a little of Roger Penrose’s take on the subject from a long
>> time ago: he introduces two schools of thought, pokes fun at both while
>> offering little or no evidence or support, then reveals he is pretty much a
>> follower of one of the two: the Church of AI-theists.
>>
>
> To be fair, he says both camps are wrong and the truth is probably
> somewhere in between. And I agree.
>
>
>> There are plenty of AI-theists, but nowhere have I ever seen a really
>> good argument for why we can never simulate a neuron and a dendrite and
>> synapses. Once we understand them well enough, we can write a sim of one.
>> We already have sims of complicated systems, such as aircraft, nuclear
>> plants and such. So why not a brain cell? And if so, why not two, and
>> why not a connectome and why can we not simulate a brain? I have been
>> pondering that question for over 2 decades and have still never found a
>> good reason.  That puts me in the Floridi-dismissed Church of the
>> Singularitarians.
>>
>
> Yeah, his "True AI is not logically impossible, but it is utterly
> implausible" doesn't seem to be based on reality.
>
> I'm neither a Singularitarian nor an AItheist. I think human-level AI is
> inevitable, if President Trump doesn't manage to wipe out the human race
> first :-). But I don't buy the notion that super intelligence is akin to a
> superpower, and don't think it's necessary for an AI to have consciousness,
> human-like emotions, or the ability to set its own goals, and without those
> there is no need to fear them.
>
> -Dave
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>