[ExI] 'Friendly' AI won't make any difference
Colin Hales
col.hales at gmail.com
Thu Feb 25 22:43:53 UTC 2016
<sigh>
This is another moment when I find my guts crawling with frustration. I had
sworn off responding to this stuff. Screw it, it's Friday. I tried to get
this into IEET. But apparently being actually qualified and actually doing
AGI for real doesn't get you any credence in assessing AGI risk. They'd
rather listen to well-meaning but ignorant kids than to anyone actually
doing this stuff.
Evaluations of the AI risk landscape are, so far, completely and utterly
vacuous and misguided. They completely miss an entire technological outcome
that totally changes everything. Indeed, as someone actually building this
alternate stuff, I feel embarrassed for those poor folk who have wasted so
much good money and time and trees on what history will judge as empty
hand-wringing about risks that, at their worst, will only ever be automated
human stupidity and/or human industrial accidents, with the legal
implications of nasty product failures and human error.
Why is this risk-evaluation industry wasting its time? It's _because_
computers are being used! Using computers literally eliminates the AGI risk
that you fear! It _is_ the solution. You have already solved it! It's done.
So what technology is the owner of any actual/potential risk? I.e., what is
the actual technology of real AGI? It's model-less, computer-less AGI.
There is no software, let alone any 'functions', utility or otherwise. Like
us. In computer-less, model-less AGI, an artificial bee brain is AGI. An
artificial mouse brain is AGI. An artificial dog brain is AGI. An
artificial human brain is AGI. There is an entire ecology of AGI waiting to
join nature, our natural ecology. It scales indefinitely, and we can live
with it because it's hardware, just as we live in the natural ecology.
Risks? Yes. But manageable. And, so far, completely unexplored.
Computer-less, model-less AGI is inorganic brain physics naturally
behaving. It's cognitive robotics. It's not hardware computation. There's
no software or hardware 'algorithms'. It's impossible to code anything
because there is no coding: like us. It's not neuromorphic chips. That is
just hardware computation. It's neuromimetic chips. Computer-less,
model-less AGI is the missing empirical half of the real science of AGI.
It went missing when AI started in 1956. AI is unique in the entire
history of science in lacking its empirical science. Actual AI hasn't even
started yet. So far all we've done is deep automation, which has the same
relationship to real AI that a flight simulator has to flight: the
simulator isn't flight, and the automation isn't AI. It's a study of AI
that can produce useful tools, not intelligence. All the AI built so far
has zero intellect. It is lodged at zero and will forever be lodged at
zero.
This does not mean using computers (hardware or software) isn't useful or
interesting or valuable. It does not mean that a computer can't somehow
operate at human level in some general sense that we find useful. But none
of it is AI. It's all automation. I had an entire career in automation. I
get it. What we think is AI is not AI.
Here I am actually building/growing/evolving real AGI that is literally a
self-adapting hierarchical control system with no formal symbol
manipulation whatever, no symbolic representations whatever, just physics,
knowing that this entire risk conversation is completely irrelevant and
that the real conversation hasn't even started yet. The actual risk
landscape only has one set of footprints on it: mine. Should anyone happen
to want to add their footprints to this undiscovered country, let me know.
Apparently three or four generations of presupposition have produced an
entire community of folk, especially the 'computer risk' hand-wringers,
who don't even know what empirical science is.
When I read threads like this I literally feel embarrassed for the
participants. Can someone out there other than me please at least think
about the real science of AI? I have written and published papers and a
book. I have tried repeatedly to get this out of my court and into someone
else's. I could use a hand.
regards,
Dr Colin Hales, neuroscientist and engineer, in my lab actually building
AGI, wondering when the damned penny is gonna drop and getting mighty sick
of the endless bullshit.
On Fri, Feb 26, 2016 at 8:33 AM, John Clark <johnkclark at gmail.com> wrote:
> On Thu, Feb 25, 2016 at 3:25 PM, Anders Sandberg <anders at aleph.se> wrote:
>
>>> There are indeed vested interests, but it wouldn't matter even if there
>>> weren't; there is no way the friendly AI (aka slave AI) idea could work
>>> under any circumstances. You just can't keep outsmarting something far
>>> smarter than you are indefinitely.
>>
>> Actually, yes, you can. But you need to construct utility functions with
>> invariant subspaces.
>
> It's the invariant part that will cause problems; any mind with a fixed
> goal that can never change, no matter what, is going to end up in an
> infinite loop. That's why Evolution never gave humans a fixed meta goal,
> not even the goal of self-preservation. If the AI has a meta goal of
> always obeying humans, then sooner or later stupid humans will
> unintentionally tell the AI to do something that is self-contradictory,
> or tell it to start a task that can never end, and then the AI will stop
> thinking and do nothing but consume electricity and produce heat.
>
> And besides, if Microsoft can't guarantee that Windows will always behave
> as we want, I think it's nuts to expect a superintelligent AI to.
>
> John K Clark