[ExI] [Extropolis] Old and new futurisms in Silicon Valley
Giulio Prisco
giulio at gmail.com
Sat Jan 20 05:04:17 UTC 2024
On Fri, Jan 19, 2024 at 10:34 PM Keith Henson <hkeithhenson at gmail.com> wrote:
>
> While I agree with your concerns, I think the supporters of Trump are
> more of a problem. They are what makes him a powerful person. The
> analogy with Hitler and his supporters is valid.
>
> This is a population-scale phenomenon. I think it is rooted in
> psychological traits that were selected due to repeated population
> expansions and resource crises that most of the human race experienced
> over the past 100,000 years. (Exception being the San.)
>
> For reasons I don't fully understand, a lot of people in red states
> think they are facing a bleak future. Perhaps they are justified: a
> lot of jobs were wiped out by technological innovation, and many more
> were moved to China because of the Harvard Business School policy of
> shareholder profit above all other considerations. Another
> factor is the high cost of education. Still another is the high cost
> of medical care.
>
> People have been selected for psychological traits leading to wars.
> The first response to a perception of a bleak future is a higher gain
> in the circulation of xenophobic or outright crazy memes (QAnon for
> example). In the Stone Age, this dehumanized the neighbors in
> preparation for killing them for their resources. (In times of plenty
> your group swapped wives with them.)
>
> This process toward war does not have to end in an actual war; it
> could stall at the crazy-meme stage. It could also back off, the way
> the IRA lost support as the Irish economy raised per-capita income and
> Irish women cut back the number of children they had.
>
> By this model, Trump would lose support if the MAGA crowd perceived a
> brighter future. How to accomplish that is a good question. Perhaps
> we should quiz the AIs.
>
Hi Keith. YES. THIS is the real way to counter Trump. Following up on
my previous reply, it is the "liberal left" that created the doomer
"culture" of fearing the future. Let's create a more optimistic and
hopeful culture, and Trump will become a footnote in history.
> Keith
>
> PS Large-scale social (religious) movements are well known.
> https://en.wikipedia.org/wiki/Great_Awakening
>
> On Fri, Jan 19, 2024 at 11:04 AM John Clark <johnkclark at gmail.com> wrote:
> >
> > I watched the video at https://www.youtube.com/watch?v=sdjMoykqxys. I strongly agree with everything Max More said, with one exception: his skepticism of the Singularity. I think a strong case (though not a proof) can be made for the Singularity, and I will try to do so now. We know for a fact that the human genome is only 750 MB long (it contains 3 billion base pairs; there are 4 bases, so each base can represent 2 bits, and there are 8 bits per byte). We know for a fact it contains a vast amount of redundancy and gibberish (for example, many thousands of repetitions of ACGACGACGACG), and we know it contains the recipe for an entire human body, not just the brain. So the technique the human mind uses to extract information from the environment must be pretty simple, specifiable in VASTLY less than 750 MB. I'm not saying an AI must use the exact same algorithm that humans use; it may find an even simpler one. But this does tell us that such a simple thing must exist: 750 MB is just the upper bound, and the true number must be much, much less. So even though this AI seed algorithm would require a smaller file size than a medium-quality JPEG, it enabled Albert Einstein to go from understanding precisely nothing in 1879 to being the first man to understand General Relativity in 1915. And once a machine discovers such an algorithm, then, like it or not, the world will start to change at an exponential rate.
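The 750 MB figure quoted above follows from simple arithmetic; a minimal sketch checking it, using only the numbers stated in the message (the variable names are my own):

```python
# Back-of-the-envelope bound on the human genome's information content,
# per the figures in the message above: 3 billion base pairs, 4 possible
# bases (so log2(4) = 2 bits per base), 8 bits per byte.
base_pairs = 3_000_000_000
bits_per_base = 2
total_bits = base_pairs * bits_per_base
total_bytes = total_bits / 8
total_mb = total_bytes / 1_000_000  # decimal megabytes

print(total_mb)  # 750.0
```

This is the raw upper bound only; as the message notes, redundancy means the true information content is far smaller.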
> >
> > So we can be as certain as we can be of anything that it should be possible to build a seed AI that can grow from knowing nothing to being super-intelligent, and the recipe for building such a thing must be less than 750 MB, a LOT less. For this reason I never thought a major scientific breakthrough was necessary to achieve AI, just improved engineering, but I didn't know how much improvement would be necessary. However, about a year ago a computer was able to easily pass the Turing test, so today I think I do. That's why I say a strong case could be made that the Singularity is not only likely to happen, but likely to happen sometime within the next five years. And that's why I'm so terrified of the possibility that during this hyper-critical time for the human species the most powerful human being on the face of the planet will be an anti-science, anti-free-market, wannabe dictator with the emotional and mental makeup of an overly pampered nine-year-old brat who probably can't even spell AI.
> >
> > John K Clark
> >
> > --
> > You received this message because you are subscribed to the Google Groups "extropolis" group.
> > To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+unsubscribe at googlegroups.com.
> > To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv2diRbDfcYNT2KRAHoLDSM7F4jux%3DrfEywNDhycn%2BS2oQ%40mail.gmail.com.
>
> --
> To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAPiwVB4PezGSJSQfARZwF0obbnHEwP6HTv%2Bb8MT48CbuWGKfRQ%40mail.gmail.com.