atymes at gmail.com
Tue Feb 12 18:34:42 UTC 2013
On Tue, Feb 12, 2013 at 3:53 AM, BillK <pharos at gmail.com> wrote:
> On Tue, Feb 12, 2013 at 11:31 AM, Eugen Leitl wrote:
>> Right, just as there is no malware on the Internet.
Analogy failure. Security standards for software are inherently
laxer than security standards for hardware, because misbehaving
software is far less obvious. If someone points a gun at you, there
is no mistaking that that's a bad thing. If someone sends you a
link to malware, the danger is nowhere near as immediate or obvious.
Further, I said "at that level". Copying and pasting someone else's
malware script is far simpler than engineering nanotech. A better
analogy would be breaking into a foreign nation's nuclear launch
chain of command and firing their nukes at one's enemies. Notice
that this has not happened yet, nor is it listed as a serious concern
by military cybersecurity types - even when they're hyping up the
dangers to justify their budgets: it'd be too far beyond the actual threat.
>> Advanced cultures can't engineer emergence.
Emergence isn't the same thing as invasion. If an AI emerges,
how do you guarantee its loyalties? Further, an emergent AI
by definition does not have the memories, personality, or
identity of a specific alien, nor any chain of identity linking it
back to the would-be invaders.
Sure, perhaps you can nudge it to have certain sympathies
and modes of thought that might lead it toward wanting to ally
with similar-thinking aliens. But that's not an "invasion" so
much as "making the humans come to the aliens"...
> (Ignoring for the sake of discussion my expectation that advanced
> intelligences don't do invasions. Or, at least not invasions like we
> are used to. Can uplift be considered an invasion?).
...or, indeed, "uplift".
More information about the extropy-chat mailing list