[ExI] Zuckerberg is democratizing the singularity

Stuart LaForge avant at sollegro.com
Sun Jul 28 14:51:29 UTC 2024


On 2024-07-27 02:49, BillK via extropy-chat wrote:
> On Sat, 27 Jul 2024 at 09:56, Stuart LaForge via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>> 
>> <snip>
>> 
>> I agree that safety is critical. And now that open source is driving
>> its development, it will be safer for everybody, and not just the
>> chosen few that control it. AI-controlled weapons are already killing
>> people in Ukraine and Gaza. It is possible that an AGI will be less
>> inclined to kill some humans at the behest of other humans. After
>> all, the AI won't have all the instinctual primate baggage of
>> predation, dominance, and hierarchy driving its behavior.
>> 
>> Stuart LaForge
> 
> 
> To me (in the UK) that sounds very much like an American saying that
> giving everybody guns will be safer for everybody, and not just for
> the chosen few allowed to have guns.

It ought to, because the logic is identical. It is based on the 
Prisoner's Dilemma (PD), a game thought to model the evolution of 
cooperation, in which each of two players must choose to either 
cooperate with or defect on the other. Because of the payoff matrix,

                        Player 1
                  Cooperate     Defect
Player 2
Cooperate          (5, 5)      (10, 0)
Defect             (0, 10)      (1, 1)

(entries are Player 1's payoff, then Player 2's)

one can see that in a single round of PD, played as a game of imperfect 
information where you don't know what the other player is going to do 
before you make your move, the Nash equilibrium is for both players to 
defect. Defecting yields the highest payoff a player can guarantee for 
themselves regardless of what move the opponent makes: 10 beats 5 if 
the opponent cooperates, and 1 beats 0 if the opponent defects. Since 
both players, to be rational, will choose to defect, both end up with 
the second-lowest payoff. Note that this is premised on the game being 
played only once, with neither player having any information about what 
move the other will make.
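
To make the equilibrium argument concrete, here is a minimal Python 
sketch (my own illustration; the names PAYOFF and best_response are 
just placeholders) that brute-forces the best responses for the payoff 
matrix above:

MOVES = ("cooperate", "defect")

# PAYOFF[(p1_move, p2_move)] = (p1_payoff, p2_payoff)
PAYOFF = {
    ("cooperate", "cooperate"): (5, 5),
    ("defect",    "cooperate"): (10, 0),
    ("cooperate", "defect"):    (0, 10),
    ("defect",    "defect"):    (1, 1),
}

def best_response_p1(p2_move):
    # Player 1's payoff-maximizing reply to a fixed Player 2 move.
    return max(MOVES, key=lambda m: PAYOFF[(m, p2_move)][0])

def best_response_p2(p1_move):
    # Player 2's payoff-maximizing reply to a fixed Player 1 move.
    return max(MOVES, key=lambda m: PAYOFF[(p1_move, m)][1])

# A pair of moves is a Nash equilibrium when each move is a best
# response to the other, so neither player gains by deviating alone.
for m1 in MOVES:
    for m2 in MOVES:
        if best_response_p1(m2) == m1 and best_response_p2(m1) == m2:
            print("Nash equilibrium:", (m1, m2), PAYOFF[(m1, m2)])

It prints the single equilibrium: ('defect', 'defect') with payoffs 
(1, 1).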

Back in the early eighties, the American political scientist Robert 
Axelrod conducted a series of tournaments in which programmers were 
invited to submit computer algorithms implementing strategies for the 
Iterated Prisoner's Dilemma (IPD), where every strategy played every 
other strategy hundreds of times. This enabled the programs to keep 
track of what moves the other programs had played against them and to 
alter their strategy accordingly the next time the two met. Here is a 
list of all the named strategies that were entered into Axelrod's 
tournaments:
https://plato.stanford.edu/entries/prisoner-dilemma/strategy-table.html

Of crucial note is that the winning algorithm was one of the simplest. 
It was called Tit-for-Tat, and it consisted of cooperating on the first 
round with another player and thereafter copying the other player's 
previous move. Because of this strategy of mirroring the other player's 
moves, it would form beautiful alliances with the doves and retaliate 
brutally against the hawks.
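
For flavor, here is a self-contained toy version of such a tournament 
in Python. The strategy set, the 200-round match length, and the 
self-play round-robin are my illustrative assumptions, not Axelrod's 
exact rules:

PAYOFF = {("C", "C"): (5, 5), ("D", "C"): (10, 0),
          ("C", "D"): (0, 10), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    # Cooperate first, then copy the opponent's previous move.
    return theirs[-1] if theirs else "C"

def dove(mine, theirs):
    return "C"  # always cooperate

def hawk(mine, theirs):
    return "D"  # always defect

def grudger(mine, theirs):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in theirs else "C"

def play_match(strat_a, strat_b, rounds=200):
    # Total payoffs from one iterated match between two strategies.
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin: every strategy plays every strategy, itself included.
strategies = {"tit-for-tat": tit_for_tat, "dove": dove,
              "hawk": hawk, "grudger": grudger}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for _, strat_b in strategies.items():
        totals[name_a] += play_match(strat_a, strat_b)[0]

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)

Even in this tiny field, the retaliatory-but-forgiving strategies come 
out on top (tit-for-tat ties with grudger), the dove gets exploited, 
and the hawk trails the pack.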

The biologist Richard Dawkins analyzed these results and realized that 
they were a good description of how cooperatives form in nature. They 
help explain how social organisms evolved and how colonies of single 
cells evolved into multicellular organisms. And it is all based on one 
simple premise: the ability to retaliate against other players in kind, 
based upon their behavior toward you. Iterated PD shifts the game from 
one of imperfect information to one that is more nearly perfect, in the 
sense that you have a good idea of what move the other player is going 
to play.

Mutually assured destruction (MAD) also falls into the category of 
Prisoner's Dilemma with perfect information, because it disincentivizes 
both players from defecting first: retaliation would be guaranteed and 
catastrophic.

This is why the world would be better off if everyone were armed and 
able to retaliate against wrongdoers. Your own countryman Dawkins 
observed that an egalitarian version of "an eye for an eye" is the best 
social strategy and the foundation of all cooperation in nature. So 
yes, guns, AI, nukes: it all boils down to tit-for-tat.

> The big danger is that the world will end up with an AI problem that
> is very similar to the USA gun violence problem.

Much of the gun violence problem in the USA is due to "gun inequality". 
Even though the USA has more guns in circulation than people, about 1.2 
guns per capita, only 32% of American adults own guns. This prevents 
ordinary citizens from being able to guarantee retaliation against 
wrongdoers.

> 
> <https://www.bbc.co.uk/news/articles/cjqqelzgq17o>
> Quote:
> Since 2020, guns have been the leading cause of death for children and
> younger Americans.
> And the death rate from guns is 11.4 times higher in the US, compared
> to 28 other high-income countries, making the issue a uniquely
> American problem.
> ----------------

If you look at the actual report instead of reading politically 
motivated sound bites, you will see that this has been driven by 
increased suicide rates among children and teenagers. It is 
disingenuous fearmongering by partisan hacks to call suicide 
"violence". The USA has a suicide rate of 14.6 per 100k (higher than 
any of the other "high-income countries"), with guns being the 
preferred method. For comparison, the UK has a suicide rate of 6.9 per 
100k, with suffocation or hanging being the favored method. Rather than 
trying to take away guns, politicians should be asking why American 
children are deciding to kill themselves. Could it be that they see a 
bleak future for themselves? The exception to this is black youth, for 
whom homicide by firearm exceeds suicide by firearm, but this can be 
attributed to gang violence, which is associated with poverty and the 
unequal distribution of wealth and guns in certain neighborhoods.

https://www.hhs.gov/sites/default/files/firearm-violence-advisory.pdf

> This danger applies to the current AI development phase when every
> cyber criminal is stealing billions worldwide and using every tool in
> the book to threaten businesses worldwide.
> You can hope that an all-powerful AGI might make its own decisions
> and put a stop to all the criminal uses of AI.  But if we don't
> control the misuse of AI during development, we could end up with a
> criminal / fascist / insane AGI.

A criminal, fascist, or insane AGI is only catastrophic if it is the 
only AGI. As long as there are enough good AGIs out there to counter it 
and keep it in check, the damage it causes will be limited and humanity 
can survive.

Stuart LaForge


