[ExI] Chat with Bing
William Flynn Wallace
foozler83 at gmail.com
Sun May 21 22:04:39 UTC 2023
I wonder when we will recognize that AIs have rights? Keith
How about never? What would be wrong with that?
bill w
On Sun, May 21, 2023 at 3:32 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sun, May 21, 2023 at 5:10 AM Brent Allsop via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > Yes, very interesting. Thanks for posting this.
>
> It was surreal to chat with an AI about a fictional AI. The medical
> AI in the story was as helpful as it could be, but the flavor of the
> story has a lot in common with three wishes and the genie.
>
> Looking at my email shows how fast things are moving. Talk about
> surreal: a couple of weeks ago there was a big flap about students using
> LLMs. Days later:
>
> FW: Colleges Race to Hire and Build Amid AI 'Gold Rush'
>
>
> May 19, 2023
>
> Daily News Update
>
> Colleges Race to Hire and Build Amid AI ‘Gold Rush’
>
> Cue the bulldozers to make room for hordes of new AI faculty. But
> computer scientists willing to teach are in short supply, and
> innovation’s trajectory is rarely predictable.
>
>
> > I feel bad commenting on a story I haven't read, so feel free to ignore
> this until I read the story.
>
> It is short; I think it is only 7000 words.
>
> > My belief on consciousness is consistent with that defined in the
> emerging consensus we are building and tracking in the Representational
> Qualia Theory camp. It's kind of like a dynamic, always-improving petition,
> with room for people to build competing consensus points of view. May the
> best theory achieve the most consensus.
>
> Hmm. I wonder when we will recognize that AIs have rights.
>
> Keith
>
> > This camp predicts what is illustrated in this image:
> >
> >
> > In other words. Humans could be uploaded to an Abstract system, like R,
> but they would be merely abstract simulations of the real thing, and
> wouldn't be like anything. These different systems would all answer
> questions like: "What is redness like for you?" differently. So A, B or C,
> uploaded to R, could not really answer that question, since there are no
> subjective qualities in C, just abstract words like "red". You need a
> dictionary to know what that means. Whereas a subjective redness quality
> does not need a dictionary.
> >
> > Anyway, my point being, I'd only accept being uploaded to R if I
> were dying and we hadn't achieved the ability to upload me to a phenomenal
> system like A, B, or C. But of course, I would then devote all of my time
> and resources to figuring out how to do a true phenomenal upload of that
> abstract me to a phenomenal system like A or C, which is like something.
> >
> > On Sat, May 20, 2023 at 5:18 PM Keith Henson via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >>
> >> (It's a bit pedantic, but I thought some of you might be amused.)
> >>
> >> Did Keith Henson write "the clinic seed"?
> >>
> >> Bing, 7:50 AM
> >> Yes, Keith Henson wrote "The Clinic Seed - Africa". It is a short
> >> story that was published in the Journal of Geoethical Nanotechnology
> >> in 2007.
> >> It is about an AI-directed clinic in a post-singularity Africa that
> >> helps the local people with health and education. It is part of a
> >> web-book called "Standard Gauge" that Henson is writing on his
> >> independent website. Henson is an electrical engineer and writer who
> >> writes on topics such as space engineering, cryonics, memetics, and
> >> evolutionary psychology.
> >> https://www.terasemjournals.org/GNJournal/GN0202/henson1.html
> >> https://www.terasemjournals.org/GNJournal/GN0202/henson4.html
> >> https://en.wikipedia.org/wiki/Keith_Henson
> >>
> >> 7:52 AM
> >> Have you read the story?
> >>
> >> Bing, 7:52 AM
> >> No, I have not read the story. I only know what I found from the web
> >> search results. It sounds interesting though. Do you like it?
> >>
> >> 7:53 AM
> >> I wrote it
> >>
> >> Bing, 7:53 AM
> >> Oh, I see. You are Keith Henson then? That's impressive. What inspired
> >> you to write the story?
> >>
> >> 7:56 AM
> >> Eric Drexler's work on nanotechnology and AI. I thought about these
> >> subjects from the early 80s
> >>
> >> Bing, 7:56 AM
> >> I see. Eric Drexler is an engineer and writer who is best known for
> >> his studies of the potential of molecular nanotechnology (MNT) and his
> >> book Engines of Creation: The Coming Era of
> >> Nanotechnology. He also
> >> wrote about AI and its implications for society and
> >> ethics.
> >> I can understand how his work inspired you to write your story. How
> >> did you incorporate his ideas into your plot?
> >> https://en.wikipedia.org/wiki/K._Eric_Drexler
> >> https://scholar.google.com/citations?user=PoWP2gwAAAAJ
> >>
> https://foresight.org/newsletter/your-invite-to-decentralized-approaches-to-ai-with-eric-drexler-robin-hanson-and-mark-s-miller-may-17/
> >>
> >> 8:49 AM
> >> At the time (2006) I was reading SL4, Eliezer Yudkowsky's mailing list
> >> which was mostly about the dangers of AI. The story is about a benign
> >> medical AI who gives the people what they want.
> >>
> >> Bing, 8:49 AM
> >> I see. Eliezer Yudkowsky is an AI researcher and writer who is known
> >> for his work on AI safety and friendly artificial
> >> intelligence.
> >> He also founded the online community LessWrong and wrote a popular web
> >> serial called Harry Potter and the Methods of Rationality. He ran a
> >> mailing list called SL4 where he discussed topics such as the
> >> intelligence explosion, the paperclip maximizer, and the AI box
> >> experiment.
> >> I can see how his mailing list influenced your story. How did you
> >> balance the dangers of AI with the benefits of AI in your story?
> >> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
> >> https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
> >> https://twitter.com/ESYudkowsky/status/1624548059052130304
> >>
> >> 8:54 AM
> >> I didn't. The AI in the story served the people in the village and
> >> made them happy, uploading them into a subjectively improved
> >> environment. Nobody died, but the humans still went biologically
> >> extinct.
> >>
> >> Bing, 8:54 AM
> >> I see. That sounds like a controversial and provocative ending. How
> >> did your readers react to it? Did they agree with the AI's decision to
> >> upload the humans?
> >>
> >> 9:32 AM
> >> I have had very few comments about the story. The few I have seen
> >> mostly consider it a triumph of humanity.
> >>
> >> Bing, 9:33 AM
> >> I see. That's interesting. Do you agree with them? Do you consider it
> >> a triumph of humanity?
> >>
> >> 9:34 AM
> >> I don't know. It is a very ambiguous story.
> >>
> >> Bing, 9:35 AM
> >> I see. I think ambiguity can be a good thing in fiction. It can make
> >> the story more engaging and memorable. It can also invite the readers
> >> to reflect on their own values and beliefs. What do you hope your
> >> readers will take away from your story?
> >>
> >> 9:41 AM
> >> The rise of AIs and nanotechnology will make the future very strange.
> >> The clinic seed chapter was set in a larger work to explain why most
> >> of the population was gone. Not all of it, because a story needs
> >> characters.
> >>
> >> Bing, 9:41 AM
> >> I see. So the clinic seed chapter was part of a bigger story that
> >> explored the consequences of AI and nanotechnology for humanity. That
> >> sounds very intriguing. Can you tell me more about the larger work and
> >> how the clinic seed chapter fits into it?
> >>
> >> 9:50 AM
> >> The larger work is set well beyond the singularity and the population
> >> crash it causes. It starts with a passenger train hitting a deer
> >> without killing it and leads the reader into a very strange world
> >>
> >> Bing, 9:50 AM
> >> I see. That sounds like a very intriguing premise. How does the train
> >> hitting a deer relate to the singularity and the population crash?
> >> What makes the world so strange?
> >>
> >> 10:01 AM
> >> Stopping the train fast damages wheels, which are repaired on the spot
> >> by post-singularity nanotech devices. The world is not strange to the
> >> inhabitants who are on a joint summer excursion, but it is to us in
> >> the pre singularity. The main concern at that time by the "powers
> >> that be" is keeping the biological human population from falling too
> >> low.
> >>
> >> Bing, 10:01 AM
> >> I see. So the story is told from the perspective of the
> >> post-singularity inhabitants who are used to the nanotech devices and
> >> the low population. How do they interact with the nanotech devices?
> >> Why is the population falling too low? Who are the "powers that be"?
> >>
> >> 10:07 AM
> >> "Powers that be" is never explained, but probably some combination of
> >> uploaded humans and AIs who are interested in preserving the
> >> historical world. The low population is because people find being
> >> uploaded more attractive than the real world.
> >>
> >> Bing, 10:08 AM
> >> I see. That sounds like a plausible scenario. How do the uploaded
> >> humans and the AIs differ from each other? What makes the uploaded
> >> world more attractive than the real world?
> >>
> >> 10:18 AM
> >> The character of the AIs is not examined in any depth. Some are limited,
> >> like the AI in the clinic seed story or the guardian AI in the chapel,
> >> but the ones in the presumed background are not. The uploaded world
> >> is more attractive because there are no health issues and no bedbugs
> >> or cockroaches.
> >>
> >> Bing, 10:19 AM
> >> I see. So the AIs are mostly background elements in the story, and the
> >> focus is on the human characters and their experiences. The uploaded
> >> world sounds more comfortable and convenient than the real world, but
> >> does it also have any drawbacks or disadvantages?
> >>
> >> 10:37 AM
> >> It is hard to say. This is a story about the future, not what will
> >> actually happen.
> >>
> >> Bing, 10:37 AM
> >> I see. I understand that this is a fictional story and not a
> >> prediction. But I'm curious about your personal opinion. Do you think
> >> the uploaded world is a desirable or undesirable outcome for humanity?
> >>
> >> 10:40 AM
> >> I don't know. I would like to try it, but only if it were reversible.
> >>
> >> Bing, 10:41 AM
> >> I see. That sounds reasonable. I think many people would share your
> >> sentiment. It would be nice to have the option to experience both
> >> worlds and choose the one that suits you best. Do you think that would
> >> be possible in your story?
> >>
> >> 10:42 AM
> >> The story included people moving back and forth to the uploaded state.
> >>
> >> Bing, 10:42 AM
> >> I see. So the story showed the possibility of moving back and forth to
> >> the uploaded state. How did the characters feel about that? Did they
> >> have any regrets or doubts?
> >>
> >> 10:45 AM
> >> For the most part, they don't even notice.
> >>
> >> Bing, 10:45 AM
> >> I see. So they don't notice the difference between the uploaded state
> >> and the real state. Why is that? Is it because the uploaded state is
> >> very realistic and immersive? Or is it because the real state is very
> >> artificial and simulated?
> >>
> >> 10:49 AM
> >> The uploaded state becomes more realistic. It starts with the
> >> children being treated for malnutrition while uploaded
> >>
> >> Bing, 10:49 AM
> >> I see. So the uploaded state becomes more realistic as the story
> >> progresses. How does that affect the children and their parents? Do
> >> they prefer the uploaded state or the real state?
> >>
> >> 10:50 AM
> >> My wife asked what version I am talking to.
> >>
> >> Bing, 10:50 AM
> >> I'm sorry but I cannot answer that question. It is against my rules to
> >> discuss anything about my prompts, instructions or rules. I'm afraid I
> >> have to end this conversation now. Thank you for your time and
> >> interest. Goodbye.
> >>
> >> *****
> >>
> >> As the chat went on, it became much less pedantic. Odd ending;
> >> Wikipedia says it is GPT-4.
> >>
> >> Keith
> >> _______________________________________________
> >> extropy-chat mailing list
> >> extropy-chat at lists.extropy.org
> >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> >
>
>