[ExI] [Extropolis] What should we do if AI becomes conscious?
efc at disroot.org
Sun Dec 15 11:16:33 UTC 2024
I do agree that the terminology is fuzzy. But I don't think that takes away
from the question.
What would I do if my AI project became conscious?
It of course depends on the circumstances (is it connected to the internet
or not, does it seem nice, or evil and manipulative, etc.), but my first
instinct would be to learn and interact.
I would be very interested in what conclusions it would reach when it
comes to the soul, afterlife, ethics etc.
Since it would be a created "artificial" consciousness, one that could talk
to its creator, I wonder how that would affect its conclusions when it comes
to the eternal questions of philosophy.
But then again, it would depend a lot on the circumstances and how it
behaves and reacts.
Best regards,
Daniel
On Sat, 14 Dec 2024, Darin Sunley via extropy-chat wrote:
> "What if an AI becomes conscious?" is the wrong question. Not least because on a deep and fundamental level, there's no way to know.
> I can't even know in a deep way that other human beings are conscious the way I am.
> The question of "consciousness" contains and subsumes several more important and relevant questions.
>
> The first and foremost being "What happens when AIs become agentic?". This will happen as soon as Anthropic, OpenAI, or Microsoft
> decide their current works-in-progress are ready for wide release, but at the bleeding edge, they're already there.
>
> Right behind that is "What if agentic AIs become both hostile and deceptive?" This is where things get interesting, in the Chinese
> curse sense of the term.
>
> The third, arguably paling into unimportance compared to the first two, is "What if AIs are moral patients (per Singer) - proper
> objects of moral consideration?" The realistic answer is probably nothing. It takes a lifetime of acculturation for humans to even
> consider small furry animals whose cries sound like human infants to be moral patients. Ain't no unmodified human ever gonna think of
> several thousand blade servers running impenetrable neural net inferences in the form of astronomically-sized matrix multiplications
> as a moral patient.
>
> On Sat, Dec 14, 2024 at 2:04 PM Keith Henson via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> On Sat, Dec 14, 2024 at 11:54 AM Brent Allsop <brent.allsop at gmail.com> wrote:
> >
> > Does this work? Does anyone use this?
> > If so, I might like to try it.
>
> I have been using Nicotinamide Riboside for more than 10 years.
> Thousands of references on the net.
>
> Keith
>
> >
> > On Sat, Dec 14, 2024 at 8:05 AM Keith Henson <hkeithhenson at gmail.com> wrote:
> >>
> >> I never really got qualia. But here it is.
> >> https://www.qualialife.com/shop/qualia-nad
> >>
> >> Free shipping too. :-)
> >>
> >> Keith
> >>
> >>
> >> On Fri, Dec 13, 2024 at 6:39 PM Brent Allsop <brent.allsop at gmail.com> wrote:
> >> >
> >> >
> >> > Hi John,
> >> > Subjective binding enables us to directly (infallibly) experience more than one quality in a unified experience.
> >> > This enables the left hemisphere of your brain to directly (infallibly) experience qualities in the other
> hemisphere.
> >> > In other words, the left hemisphere of your brain knows absolutely, not only that it is not the only conscious
> hemisphere, but it knows what the qualities in the other hemisphere are like.
> >> >
> >> > Why would we not be able to do this same thing between brains with neural ponytails?
> >> >
> >> >
> >> > On Thu, Dec 12, 2024 at 3:22 PM John Clark <johnkclark at gmail.com> wrote:
> >> >>
> >> >> On Thu, Dec 12, 2024 at 5:12 PM Brent Allsop <brent.allsop at gmail.com> wrote:
> >> >>
> >> >>
> >> >>> > We will know, absolutely, not only what is and isn't conscious,
> >> >>
> >> >>
> >> >> What's with this "we" business? I know for a fact that I'm conscious, you might be conscious but I can't be
> certain.
> >> >>
> >> >> John K Clark
> >> >>
> >> >>
> >> >>>
> >> >>> On Thu, Dec 12, 2024 at 10:35 AM Lawrence Crowell <goldenfieldquaternions at gmail.com> wrote:
> >> >>>>
> >> >>>> How would we really know that it has become conscious, even if it assumes consciousness?
> >> >>>>
> >> >>>> LC
> >> >>>>
> >> >>>> On Thu, Dec 12, 2024 at 8:28 AM John Clark <johnkclark at gmail.com> wrote:
> >> >>>>>
> >> >>>>>
> >> >>>>> What should we do if AI becomes conscious? These scientists say it’s time for a plan
> >> >>>>>
> >> >>>>> John K Clark See what's on my new list at Extropolis
> >> >>>>>
> >> >>>>> --
> >> >>>>> You received this message because you are subscribed to the Google Groups "extropolis" group.
> >> >>>>> To unsubscribe from this group and stop receiving emails from it, send an email to
> extropolis+unsubscribe at googlegroups.com.
> >> >>>>> To view this discussion visit
> https://groups.google.com/d/msgid/extropolis/CAJPayv1k9YM7%2B4A8Zb69N_ubCKuvAtLJMo1gxkWPvTqmFi%3DX4w%40mail.gmail.com.
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat