[ExI] Mono or Poly?
Adrian Tymes
atymes at gmail.com
Fri Feb 28 18:20:41 UTC 2025
On Fri, Feb 28, 2025 at 12:37 AM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Thu, Feb 27, 2025 at 1:17 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> A choice implies that someone is choosing. Who gets to make that
>> decision? Mono inherently implies that only one person or group (or AI)
>> makes that choice on behalf of everyone, but how would they enforce that
>> on the vast majority of humanity, who have never heard of them or of this
>> choice to be made? Not "how do they conquer the world", but "how do they
>> become relevant to most of the world".
>>
>
> ### Conquering the world makes you very relevant to the whole world.
>
That depends on the type of conquest. Winning niches that most people
don't even notice - directly or indirectly - can be called "conquest", but
it's a stretch to explain why most people would care.
For example, suppose some AI cornered the market on cryptocurrency.
Cryptocurrency is a global market, so this could be called a form of
conquering the world. But most humans alive today simply don't use
cryptocurrency, so the conquest would be irrelevant to most of the world.
Similar but more complex arguments apply to taking over larger sections of
finance. There is an old tale about a kid who wished to have all the money
in the world; the moment anything became "money", it teleported to his
vicinity. The world responded by ceasing to use money, since it had become
impractical as a medium of exchange. Centralizing financial power in a
single point, even an AI, makes that point irrelevant to the rest of the
world, because finance is all about parties making independent, individual
transactions.
Or take what Russia is attempting in Ukraine. Sure, military dominance
over an area is a traditional form of "conquest". But what happens when
people won't do what you demand even at gunpoint, because they know they're
more valuable to you alive - even resisting - than dead? That value has
become increasingly apparent over the past century, and it is why military
conquest doesn't work so well anymore.
There is a distinction between "conquering the world" and "becoming
relevant to most of the world" for many senses of "conquering".
> We discussed on this list many times over the years how the AI that
> achieves intelligence explosion could go about enforcing its will on us;
> this doesn't need to be belabored in detail again.
>
I remain unconvinced. The arguments tend to attribute unknowable (and
therefore unfalsifiable) yet effectively omnipotent powers to the AI merely
because of the intelligence explosion, basically positing God. There are
many kinds of intelligence, which these arguments tend to conflate.
Arguments that invented new powers as holes were pointed out have
historically tended to be incorrect; I see no reason to believe this time
will be any different.
> The next few years on Earth may be pivotal to the organization of matter
>>> in this galaxy.
>>>
>>
>> That's what was said a few years ago. And a few years before that. Yet
>> we're still here and making significant decisions. Why is now any
>> different than those past claims? What will preclude us from - in, say, 5
>> or 10 years or whatever finite time horizon you care to consider - making
>> decisions that are just as significant as the ones we make today?
>>
>
> ### We are ever closer to the intelligence explosion and the signs of its
> approach are ever more obvious, that's what's different.
>
And we will be closer 10 years from now, and then 10 years after that.
Almost all definitions of "a few years" refer to a span of less than 10
years. Unless the intelligence explosion definitely happens within the
next 10 years, the statement "the next few years on Earth may be pivotal
to the organization of matter in this galaxy" is false even granting that
the intelligence explosion would control the organization of matter in
this galaxy - a premise that itself relies on several other untested
assumptions.
> The time of cosmic significance is upon us.
>>>
>>
>> No, the time of now is upon us. This has always been the case, and
>> confusing the immediate significance of now for a significance that will
>> still matter tomorrow underlies many such proclamations. Why is today
>> more relevant to us than last year was to us while we were experiencing
>> last year? Why is today more relevant to us than next year will be to us
>> while we are experiencing next year?
>>
>
> ### The day you come to a fork in the road is more significant than a
> hundred days you spent just cruising, don't you think?
>
No, because I rarely "cruise" for that long. I come across multiple
metaphorical forks every year - some lesser, some greater.
How barren is a life that only ever comes across one fork?
You imply that the fork of now is greater than any that has ever been or
ever will be. I call bullshit, because every single time someone has made
that claim before, it has been utterly wrong, for reasons that appear to
apply this time as well.
You also imply that there will effectively be no further forks - that once
an AI reaches intelligence explosion, no further choices can be made.
This is obviously incorrect. Even if the AI shuts down all human
decision-making, which basically requires exterminating the human race (to
think and decide is inherent in the conscious human experience; the only
way to stop that is to stop there being conscious humans, which generally
requires or swiftly leads to the death of all living humans), the AI
itself can still decide.
Also note that phrase: "basically requires exterminating the human race".
Yes, plenty of science fiction scenarios have been written in which an AI
wiped out humanity. In every case that I have read, it required more than
just the AI being smart - and in most of those cases, the AI was actually
somewhat stupid, viewing humanity as a competitor or a drain on resources
rather than a resource that could be cooperated with. (It also required
humanity to unite against the AI, to a degree that humanity has never
cooperated before and would be unlikely to manage even in such scenarios.)