[extropy-chat] Fools building AIs
Eugen Leitl
eugen at leitl.org
Fri Oct 6 18:43:05 UTC 2006
On Fri, Oct 06, 2006 at 11:48:17AM -0400, Ben Goertzel wrote:
> You are missing my point ... there is a difference between
>
> a) not provably caring
Sorry, proofs don't work at all in the real physical world. They
work more or less in the formal domain, where their reach
is still very limited (but also very flattening). Since AIs
have to operate in the physical domain to be of any use
(even theoretical physics is not very theoretical, being
grounded in constraints from empirical observations), I don't
see how proofs are of any use there. You certainly can't prove
your way out of a literal (brown, wrinkled, slightly soggy) paper bag.
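To make that concrete, here is a minimal sketch in Lean 4 (a toy
theorem of my own choosing, purely illustrative):

-- A fully machine-checked proof, but one about formal objects only.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
-- Lean verifies this down to the axioms, yet it asserts nothing
-- about any physical adder circuit or sensor; the physical world
-- enters only through whatever model you bolt on, and the model
-- is exactly the part no proof can certify.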
> and
>
> b) uncaring
>
> I agree that a superhuman AI that doesn't give a shit about us is
> reasonably likely to be dangerous. What I don't see is why Eliezer
It's never a single being, it's always a population. Given the
speed of evolution in the solid state, it's going to be a highly
diverse population of agents very soon, at all levels of
complexity and motivation.
What a single (especially superhuman) agent is going to do
can't be predicted at all. What a population of diverse critters
with metabolism roaming the countryside can do, of that we do
have at least some slight idea. We've been soaking in it
for quite some time now, and the results are entirely rational
and quite deadly to anything not on two legs, and to a few
war profiteers.
> thinks an AI that is apparently not likely to be dangerous, but about
How can an artificial species (especially a superintelligent one)
suddenly operating in the here and now not be dangerous? Even
conventional invasive species wreak havoc on select parts of the
ecosystem. The most invasive species of them all, us, makes far
fewer distinctions. It crashes biodiversity without discrimination,
and entire ecosystems regress under human impact stress. We
could end up on the receiving end of that quite suddenly, if bigger
players than us were to burst upon the scene.
> whose benevolence it's apparently formidably difficult to construct a
> formal proof, is highly likely to be dangerous.
Since you can't define benevolence formally, it's not possible to
build a chain of logic giving information about benevolence in any
meaningful way. (Even in a really limited formal system like chess,
proofs are pure toilet paper.)
> I also think that looking to evolutionary biology for guidance about
> superhuman AI's is a mistake, BTW.
I'm never arguing about the motivations of superhuman AIs, only deriving
some very loose constraints upon a population of postbiological beings
that emerged locally and then radiated/speciated, some of them superhuman,
some dumb as dirt, operating in this solar system, using physical
laws as we know them. That said, evolutionary biology does give
us some answers. Unless you're proposing an alternative theory with
a better track record, there's no point yet in abandoning this
particular (cracked, blind, astigmatic) crystal ball.
I'm curious why you think evolutionary theory (a superset of game theory)
and ordinary physics are not applicable to a population of postbiological
critters. You must have reasons for your position.
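For what it's worth, here is the kind of loose constraint I mean:
a toy replicator-dynamics sketch in Python (the hawk-dove payoffs
V and C below are arbitrary assumptions, nothing measured). Whatever
the individual critters are, the population settles at a predictable
mix:

# Toy hawk-dove replicator dynamics: evolutionary game theory
# constrains the *population* without predicting any single agent.
# V (resource value) and C (fight cost) are made-up numbers.
V, C = 2.0, 3.0

def payoffs(p):
    """Expected payoff to hawks and doves at hawk frequency p."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = (1 - p) * V / 2
    return hawk, dove

p = 0.1  # initial hawk frequency
for _ in range(500):
    h, d = payoffs(p)
    mean = p * h + (1 - p) * d
    p += 0.1 * p * (h - mean)  # discrete replicator update

print(f"hawk frequency -> V/C = {V/C:.3f}; simulated: {p:.3f}")

Swap in other payoffs and you get other equilibria; the point is only
that population-level regularities survive complete ignorance about
what any individual agent is thinking.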
> This thread began as a discussion of whether or not rationality rules
> out a certain attitude toward the preservation of human life.
My flavor of rationality does. Perhaps I should switch to Coke, though.
> I don't find it accurate to say that I'm fixated on rationality,
> though I do consider it important.
I consider it very important personally, but also only one strategy among many.
--
Eugen* Leitl <leitl> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE