[ExI] Homo radiodurans (was Maximum Jailbreak)
William Flynn Wallace
foozler83 at gmail.com
Fri Oct 26 23:40:51 UTC 2018
you just can't outsmart something that's smarter than you.
John K Clark
Since everybody is so scared, all the way to downright paranoia, do you
think that the techs of the future will be alert to the possible problems you
pose? Of course they will. The first computer that shows signs of what
you are afraid of will be unplugged from anything it can manipulate and
fixed. The only scenario that would fit your thinking is if all the AIs in
the world woke up at the same time with the same agenda, somehow
overcoming Asimov's laws or their updated equivalent.
Idle speculation is not my forte, especially when I won't live to see any
outcomes. (So why don't I leave you alone with your ideas? Good idea.)
On Fri, Oct 26, 2018 at 5:16 PM John Clark <johnkclark at gmail.com> wrote:
> On Fri, Oct 26, 2018 at 5:57 PM William Flynn Wallace <foozler83 at gmail.com> wrote:
>> I think that the first time an AI tries to take over something and push
>> humans around, there will be a world-wide alarm, and similarly programmed
>> AIs will be re-programmed or unplugged.
> Even today, when they still aren't as intelligent as we are, we couldn't
> unplug all computers without slitting our own throats; we've become far too
> dependent on them for that. And it will not be any easier when they become
> smarter than us, because you just can't outsmart something that's smarter
> than you.
> John K Clark