[ExI] Homo radiodurans (was Maximum Jailbreak)

John Clark johnkclark at gmail.com
Sat Oct 27 14:42:01 UTC 2018


On Fri, Oct 26, 2018 at 7:46 PM William Flynn Wallace <foozler83 at gmail.com>
wrote:

> Since everybody is so scared, all the way to downright paranoia, do you
> think that techs of the future will be alert to the possible problems you
> pose?


Yes, but do you think the US military will unplug an AI that is working
even better than they expected if they have evidence the Chinese have a
similar machine but no evidence the Chinese have disconnected theirs? And
it wouldn't just be huge government agencies that could do it; as computers
continue getting smaller and faster, there will come a point when an
individual could make an AI in his garage or even his closet. I maintain
that the probability of biological humans staying one step ahead of
computers, despite computers' exponential increase in hardware capability
year after year and century after century, is virtually zero.


> Of course they will. The first computer that shows signs of what
> you are afraid of,

If the AI is really intelligent then it will also be alert to the possible
problems I pose, much more alert in fact than any human could be, and so it
will not display any signs of what they're afraid of until it's far too
late.


> will be unplugged from anything it can manipulate


Will the AI that runs the world's power grid be unplugged, or the stock
market, or the banking system, or missile defense, or air traffic control,
or cryptanalysis?


> and fixed.


Easier said than done; that's why even today computers behave in ways we
don't expect. There will always be an element of unpredictability in
programming. With just a few lines of code I could write a program that
will behave in ways nobody can predict: all the program would do is look
for the smallest even number greater than 2 that is not the sum of two
prime numbers, and then halt. But will it ever halt? I don't know, you
don't know, nobody knows; all you can do is watch it and see what it does,
and you might be watching forever.
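
For concreteness, here is one way such a program might look in Python;
this is just a sketch, and the helper names are mine:

# Hunt for the smallest even number greater than 2 that is NOT the
# sum of two primes, and halt only if one is found. If the Goldbach
# Conjecture is true, the loop below runs forever.

def is_prime(n):
    # Trial division: slow but obviously correct.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(n):
    # True if some prime p <= n/2 has a prime partner n - p.
    return any(is_prime(p) and is_prime(n - p)
               for p in range(2, n // 2 + 1))

n = 4
while True:
    if not is_sum_of_two_primes(n):
        print("Goldbach counterexample:", n)
        break  # the program halts, so Goldbach is false
    n += 2  # otherwise try the next even number, forever if need be

Nobody can tell you whether that break statement will ever execute;
watching the program run is the only option.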

Of course, in this example it's possible that tomorrow somebody will prove
the Goldbach Conjecture is true, and then we'd know the program will never
halt; or maybe somebody will prove the conjecture is false, and then we'd
know it will halt. But there is a third possibility. In 1936 Alan Turing,
following Gödel's 1931 incompleteness theorem, showed that there are an
infinite number of statements that are true but have no proof. If Goldbach
is one of these, and there is no way to know in advance whether it is, then
a billion years from now a Jupiter Brain will still be looking,
unsuccessfully, for a proof that it is true and still be grinding through
gigantic numbers looking, unsuccessfully, for a counterexample to show that
it is false.

> The only scenario that would fit your thinking is if all the AIs of the
> world woke up at the same time and had the same agenda, somehow overcoming
> Asimov's laws or their updated equivalent.


I love Asimov's robot stories, but his laws are laws of literature, not of
physics or mathematics.

 John K Clark
