[ExI] Machines of Loving Grace

Keith Henson hkeithhenson at gmail.com
Sun Oct 13 02:43:04 UTC 2024


On Sat, Oct 12, 2024 at 1:41 PM efc--- via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
snip
>
> Why do you think there would be no children?

From the story:

(Sex in the simulation had no biological consequences. Producing food
out of the simulation or producing babies in the simulated world were
two built-in limits Suskulan had no desire to break.)

snip

For better or for worse?

For better in that nobody died of fevers, nasty parasites, or
malnutrition since Suskulan had come into their lives.  People didn't
even die of old age, with a clinic to regress age for them, and they
aged in the spirit world only to the extent they wanted.

For worse in that she could not have children unless she left the
clinic for their gestation.  Zaba had read the design notes that led
up to the creation of the clinics and their spirits and had long
understood the mathematics behind Suskulan's limits.  In the long run,
births and deaths had to match.  If you wanted no deaths, then there
could be no births.

Since fetal development (but not post-birth growth) was arrested while
in the clinic, a number of families stuck it out until a child was
born, then moved back into the more attractive spirit-world tata to
raise the child.

(end quote)
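
The constraint Zaba refers to is just a steady-state balance: on a
fixed resource budget the population can only hold constant if births
equal deaths, so zero deaths forces zero births.  A minimal sketch in
Python (the capacity figure is made up for illustration, not from the
story):

    # Steady-state constraint behind Suskulan's limit.
    # The capacity number is an illustrative assumption.
    def step(population, births, deaths, capacity):
        """Advance the population one period under a fixed resource cap."""
        population += births - deaths
        if population > capacity:
            raise ValueError("budget exceeded")
        return population

    capacity = 10_000   # whatever resource binds: power, memory, upkeep
    n = capacity        # the tata is already at its limit
    deaths = 0          # nobody dies of fevers, parasites, or old age

    for births in (0, 1):
        try:
            step(n, births, deaths, capacity)
            print(f"births={births}: sustainable")
        except ValueError:
            print(f"births={births}: not sustainable; births must equal deaths")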

> Would it not be conceivable
> to "merge" two programs? If we have achieved that enormous control and
> understanding of the patterns in our brain, a "merge" might be one way to
> have a child?

Possible, but that violates the no-deaths, no-births standard.

> > Thinking about it, an upload world could be somewhat like our myths
> > about heaven.
> >
> > If the light-blocking object at Tabby's Star is a data center for
> > uploaded aliens, they may live where uploads reproduce.  On a
> > reasonable power budget, there could be many trillions of them in this
> > one structure, and there is light-dip evidence of them having spread to
> > at least 24 stars.  Communication must be a problem if they try to
> > keep in touch.
> >
> > I strongly suspect that technological progress will stop, and we are
> > almost at the point where we can see this in our future.
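
For scale on the "many trillions" above, a back-of-envelope helps.  The
numbers below are my own assumptions, not from any source: Tabby's Star
is roughly five times as luminous as the Sun, and even a structure that
intercepts one percent of that output, at a generous kilowatt per
upload, supports far more than trillions.

    # Back-of-envelope for uploads on a stellar power budget.
    # Every figure here is an assumption for illustration only.
    L_SUN = 3.8e26                 # watts, solar luminosity
    luminosity = 4.7 * L_SUN       # Tabby's Star, roughly, for an F-type star
    intercepted_fraction = 0.01    # assume the structure catches 1% of output
    watts_per_upload = 1e3         # assume a generous 1 kW per upload

    uploads = luminosity * intercepted_fraction / watts_per_upload
    print(f"{uploads:.1e} uploads")   # about 2e22, far more than trillions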
>
> Why do you think progress will stop?
>
> Personally, I feel that expanding knowledge, science, and technology is
> a kind of innate human need and that it will never stop. Perhaps it will
> slow down, as more and more machinery is required and as everything
> becomes ever more specialized, but I definitely think there's a paradigm
> shift or two left to chase after.
>
If there is a limit to technical knowledge, we will hit it sooner or
later.

Keith

> Best regards,
> Daniel
>
>
> > Keith
> >
> >> Best regards,
> >> Daniel
> >>
> >>
> >> On Fri, 11 Oct 2024, Keith Henson via extropy-chat wrote:
> >>
> >>> The Clinic Seed story discussed the good an AI operating a medical
> >>> facility can do, as well as its side effect of humans going
> >>> biologically extinct.  The problem was with the humans, and nobody has
> >>> ever suggested a way to avoid it.
> >>>
> >>> I think I have mentioned this before, but if not, you really should
> >>> read the Rosinante books by Gilliland.  They are (in my not-so-humble
> >>> opinion) the best on both AI and space colonies ever written.  I am
> >>> not sure that he was the first to mention it, but a self-owned
> >>> corporation is a way to give AIs (who want it) human rights.
> >>>
> >>> Keith
> >>>
> >>> On Fri, Oct 11, 2024 at 2:58 PM BillK via extropy-chat
> >>> <extropy-chat at lists.extropy.org> wrote:
> >>>>
> >>>> Dario Amodei, the CEO of Anthropic (the company that has developed
> >>>> Claude AI) has written a paper that discusses the benefits that AI
> >>>> could bring to the world over the next 5 to 10 years. I thought it was
> >>>> rather impressive - assuming that the AI effects are beneficial.
> >>>> Worth a read!
> >>>>
> >>>> BillK
> >>>>
> >>>> <https://darioamodei.com/machines-of-loving-grace>
> >>>> Quote:
> >>>> Machines of Loving Grace
> >>>> How AI Could Transform the World for the Better
> >>>> Dario Amodei   October 2024
> >>>>
> >>>> Yet despite all of the concerns above, I really do think it’s
> >>>> important to discuss what a good world with powerful AI could look
> >>>> like, while doing our best to avoid the above pitfalls. In fact I
> >>>> think it is critical to have a genuinely inspiring vision of the
> >>>> future, and not just a plan to fight fires. Many of the implications
> >>>> of powerful AI are adversarial or dangerous, but at the end of it all,
> >>>> there has to be something we’re fighting for, some positive-sum
> >>>> outcome where everyone is better off, something to rally people to
> >>>> rise above their squabbles and confront the challenges ahead. Fear is
> >>>> one kind of motivator, but it’s not enough: we need hope as well.
> >>>> ----------------------------
> >>>>


