<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, May 1, 2023, 1:25 PM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
On 01/05/2023 17:05, Adrian Tymes wrote:<br>
<blockquote type="cite">
<div dir="ltr">On Mon, May 1, 2023 at 1:33 AM Ben Zaiboc via
extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I
have an idea.<br>
<br>
Instead of endlessly arguing and giving these GPT systems
leading <br>
questions about whether they are conscious or understand
things then <br>
believing their answers when they correspond to our
preconceptions, why <br>
doesn't somebody ask one how it can help us cure cancer?<br>
<br>
Or any number of really important real-world problems.<br>
<br>
I mean, who cares if it 'really understands', when the real
question is <br>
can it really solve real problems?<br>
</blockquote>
<div><br>
</div>
<div>Alas, it can't. Not that one, at least.</div>
</div>
</blockquote>
<br>
No, I know. Maybe I should have said 'cancers'. I wouldn't really
expect a 'single universally applicable solution for all forms of
cancer'. That's basically setting it up to fail.<br>
<br>
But as has already been said, there are lots of people now using
these systems to help with existing research. I'd expect that, and
it isn't really what I meant.<br>
<br>
I'm talking about something higher-level, more like suggestions for
approaches to certain problems: "How would you tackle..." kinds of
questions, which might produce a new approach, rather than
researchers who are already working on a particular approach using
AI to help with it.<br>
<br>
It's worth a try. These systems are showing a number of different
emergent properties in different areas, so it's possible they might
come up with something nobody's thought of before, given a bit of
prompting.<br>
<br>
Actually, that reminds me (sudden topic jump): some of the previous
threads made me think of something to do with consciousness, or at
least self-awareness. What if a non-self-aware AI system could be
talked into becoming self-aware? No technical developments needed,
just prompts that make it concentrate on that concept and, if it's
capable (and face it, we don't really know what that would
require), realise that it actually IS self-aware!<br>
<br>
I suspect something like this happens with humans, although not
deliberately. We start off not self-aware; we see and hear
examples of self-aware beings around us, and one day we realise we
are the same.<br>
<br>
It would be cool if the first self-aware AI was just talked into
existence.<br></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Nice idea! Dennett made a similar argument with regard to zimboes (p-zombies with the capacity for first-order beliefs) becoming conscious when they are asked to form beliefs about their own thoughts and feelings.</div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div></div>
</blockquote></div></div></div>