[ExI] Survival (was: elections again)
Harvey Newstrom
mail at harveynewstrom.com
Mon Dec 31 20:13:29 UTC 2007
On Monday 31 December 2007 13:40, Eugen Leitl wrote:
> On Mon, Dec 31, 2007 at 01:11:17PM -0500, Harvey Newstrom wrote:
> > Then I want to be the cutest pet ever! Or else the stealthiest scavenger
> > rat!
>
> Don't we all! But it's not obvious artificial beings will conserve specific
> traits we equipped them with initially (assuming we could do that at all,
> which is not obvious) across many generations (not necessarily long in
> terms of wallclock time). Or that they will keep environments nicely
> temperate and full of breathable gases, and allow us to grow or synthesize
> food.
I assume that such super machines would outgrow the need to adapt their
environment. They would be functional in virtually any environment, so they
might have no need to rework existing ones.
> You can't outrun lean and mean Darwin survival machines.
Bacteria can't outrun us either, but we haven't exterminated them all yet.
We don't have to outrun them, just get out of their way.
> > Even given your scenarios, we have a lot of choices on how our
> > subjugation is going to occur.
>
> How much are cockroaches worth on the Wall Street job market? Do they make
> good quants?
Cockroaches have no influence on Wall Street. But they have almost total
control over their own nests and societies. Sure, we wipe them out where
they are in the way. But where they do exist, humans have virtually no
influence on them. I doubt most cockroaches even know that humans exist.
> Not any time soon. But, eventually. We might not see it (heck, what is
> another 40-50 years), but our children could very well, and their
> children's children almost certainly (unless they're too busy fighting in
> the Thunderdome, of course).
I think it is possible, but unlikely, that our children will see this. It all
assumes that a self-evolving AI will suddenly evolve quickly. Evolution is a
slow, random process that uses brute force to solve problems. Growing smarter
is not a simple brute-force search. Even a super-smart AI won't instantly
have god-like powers. It will have to perform slow physical experiments in
the real world to discover or build faster communications, transportation,
and ways to utilize resources. It also will have to build factories to
manufacture future hardware upgrades. These macro-scale physical processes
are slow and easily disrupted. It is not clear to me that even a
super-intelligent AI can quickly or easily accomplish anything that we really
want to stop.
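
To make the brute-force point concrete, here is a minimal Python sketch (the
target value, search range, and trial count are all made up for illustration)
comparing blind random sampling, roughly how evolution explores, with a
simple hill climber that uses feedback to direct its next step:

import random

TARGET = 1000  # hypothetical fitness peak; we maximize -abs(x - TARGET)
SPACE = 10_000_000  # size of the made-up search space

def fitness(x):
    return -abs(x - TARGET)

def random_search(trials=100_000):
    # Brute force: sample blindly, keep the best guess seen so far.
    best = random.randint(0, SPACE)
    for _ in range(trials):
        candidate = random.randint(0, SPACE)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best, trials

def hill_climb(start=0):
    # Directed search: step toward whichever neighbor scores better.
    x, steps = start, 0
    while True:
        steps += 1
        best_neighbor = max((x - 1, x + 1), key=fitness)
        if fitness(best_neighbor) <= fitness(x):
            return x, steps  # no neighbor improves; we are at the peak
        x = best_neighbor

best, n = random_search()
print(f"random search: best={best} after {n} blind evaluations")
best, n = hill_climb()
print(f"hill climbing: best={best} after {n} directed steps")

On these made-up numbers, the climber reaches the exact peak in about a
thousand directed steps, while a hundred thousand blind samples typically
only land somewhere near it. That gap between blind and directed search is
why I don't expect the random version to produce sudden god-like capability.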
I'd like to see some concrete scenarios that rely on something more specific
than "...a miracle/singularity occurs here..."
--
Harvey Newstrom <www.harveynewstrom.com>
CISSP CISA CISM CIFI GSEC IAM ISSAP ISSMP ISSPCS IBMCP