[ExI] [Extropolis] Old and new futurisms in Silicon Valley

Darin Sunley dsunley at gmail.com
Sat Jan 20 21:30:03 UTC 2024


I agree with this emphatically. This is precisely what's going on. A lot of
people in Red states do indeed perceive themselves and their families to
have a bleak future. This is indeed triggering intense xenophobia. If they
felt happy and prosperous, this feeling would indeed dissipate.

The Progressive Left's response, of course, is to use all of the power at
their disposal, which is not insignificant at this point, to triple down on
the policies that generated this situation, and to talk openly about
punishing and disenfranchising the Red states and their people.

On Fri, Jan 19, 2024 at 2:42 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> While I agree with your concerns, I think the supporters of Trump are
> more of a problem.  They are what makes him a powerful person.  The
> analogy with Hitler and his supporters is valid.
>
> This is a population-scale phenomenon.  I think it is rooted in
> psychological traits that were selected due to repeated population
> expansions and resource crises that most of the human race experienced
> over the past 100,000 years.  (Exception being the San.)
>
> For reasons I don't fully understand, a lot of people in red states
> think they are facing a bleak future.  Perhaps they are justified: a
> lot of jobs were wiped out by technological innovation, and many more
> were moved to China because of the Harvard Business School doctrine of
> profit to the shareholders above all other considerations.  Another
> factor is the high cost of education.  Still another is the high cost
> of medical care.
>
> People have been selected for psychological traits leading to wars.
> The first response to a perception of a bleak future is a higher gain
> in the circulation of xenophobic or outright crazy memes (QAnon for
> example).  In the Stone Age, this dehumanized the neighbors in
> preparation for killing them for their resources.  (In times of plenty
> your group swapped wives with them.)
>
> This process toward war does not have to end in an actual war; it
> could stall at the crazy-meme stage.  It could also back off, the way
> the IRA lost support as the Irish economy improved, income per capita
> rose, and Irish women cut back on the number of children they had.
>
> By this model, Trump would lose support if the MAGA crowd perceived a
> brighter future.  How to accomplish that is a good question.  Perhaps
> we should quiz the AIs.
>
> Keith
>
> PS Large-scale social (religious) movements are well known.
> https://en.wikipedia.org/wiki/Great_Awakening
>
> On Fri, Jan 19, 2024 at 11:04 AM John Clark <johnkclark at gmail.com> wrote:
> >
> > I watched the video at https://www.youtube.com/watch?v=sdjMoykqxys, and
> > I strongly agree with everything Max More said, with one exception: his
> > skepticism of the Singularity. I think a strong case, though not a
> > proof, can be made for the Singularity, and I will try to make it now.
> > We know for a fact that the human genome is only 750 MB long (it
> > contains 3 billion base pairs, there are 4 bases, so each base can
> > represent 2 bits, and there are 8 bits per byte), we know for a fact it
> > contains a vast amount of redundancy and gibberish (for example, many
> > thousands of repetitions of ACGACGACGACG), and we know it contains the
> > recipe for an entire human body, not just the brain. So the technique
> > the human mind uses to extract information from the environment must be
> > pretty simple, describable in VASTLY less than 750 MB. I'm not saying an
> > AI must use the exact same algorithm that humans use; it may find an
> > even simpler one, but this does tell us that such a simple thing must
> > exist: 750 MB is just the upper bound, and the true number must be much,
> > much less. So even though this AI seed algorithm would fit in a smaller
> > file than a medium-quality JPEG, it enabled Albert Einstein to go from
> > understanding precisely nothing in 1879 to being the first man to
> > understand General Relativity in 1915. And once a machine discovers such
> > an algorithm then, like it or not, the world will start to change at an
> > exponential rate.
> >
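> > Spelled out, that back-of-the-envelope arithmetic is simply (a rough
> > sketch in Python using only the figures above; the names are just
> > illustrative):
> >
> >     base_pairs = 3_000_000_000            # roughly 3 billion base pairs
> >     bits_per_base = 2                     # 4 possible bases -> 2 bits each
> >     total_bits = base_pairs * bits_per_base
> >     total_megabytes = total_bits / 8 / 1_000_000   # 8 bits per byte
> >     print(total_megabytes)                # 750.0, the upper bound above
> >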
> > So we can be as certain as we can be of anything that it should be
> > possible to build a seed AI that can grow from knowing nothing to being
> > super-intelligent, and the recipe for building such a thing must be less
> > than 750 MB, a LOT less. For this reason I never thought a major
> > scientific breakthrough was necessary to achieve AI, just improved
> > engineering, but I didn't know how much improvement would be necessary;
> > however, about a year ago a computer was able to easily pass the Turing
> > test, so today I think I do. That's why I say a strong case can be made
> > that the Singularity is not only likely to happen, it is likely to
> > happen sometime within the next five years, and that's why I'm so
> > terrified of the possibility that during this hyper-critical time for
> > the human species the most powerful human being on the face of the
> > planet will be an anti-science, anti-free-market, wannabe dictator with
> > the emotional and mental makeup of an overly pampered nine-year-old brat
> > who probably can't even spell AI.
> >
> > John K Clark
> >
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

