[ExI] Dr. GPT, Problem-solver

Jason Resch jasonresch at gmail.com
Mon May 1 20:15:20 UTC 2023


On Mon, May 1, 2023, 1:25 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 01/05/2023 17:05, Adrian Tymes wrote:
>
> On Mon, May 1, 2023 at 1:33 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I have an idea.
>>
>> Instead of endlessly arguing and giving these GPT systems leading
>> questions about whether they are conscious or understand things, then
>> believing their answers when they correspond to our preconceptions, why
>> doesn't somebody ask one how it can help us cure cancer?
>>
>> Or any number of really important real-world problems.
>>
>> I mean, who cares if it 'really understands', when the real question is
>> can it really solve real problems?
>>
>
> Alas, it can't.  Not that one, at least.
>
>
> No, I know. Maybe I should have said 'cancers'. I wouldn't really expect a
> 'single universally applicable solution for all forms of cancer'. That's
> basically setting it up to fail.
>
> But as has already been said, there are lots of people now using these
> systems to help with existing research. I'd expect that, and it isn't
> really what I meant.
>
> I'm talking about something higher-level, more like suggestions for
> approaches to certain problems: "How would you tackle..." kinds of
> questions, which might produce a new approach, rather than researchers who
> are already working on a particular approach using AI to help with it.
>
> Worth a try, as these systems are showing a number of different emergent
> properties in different areas, so with a bit of prompting they might come
> up with something nobody has thought of before.
>
> Actually, that reminds me (sudden topic jump): some of the previous
> threads made me think of something to do with consciousness, or at least
> self-awareness. What if a non-self-aware AI system could be talked into
> becoming self-aware? No technical developments, just giving it prompts
> that make it concentrate on that concept until, if it's capable (and,
> face it, we don't really know what that would require), it realises that
> it actually IS self-aware!
>
> I suspect something like this happens with humans, although not
> deliberately. We start off not being self-aware, we see and hear examples
> of self-aware beings around us, and one day realise we are the same.
>
> It would be cool if the first self-aware AI was just talked into existence.
>

Nice idea! Dennett made a similar argument with regard to zimboes
(p-zombies with the capacity for first-order beliefs) becoming conscious
when they are asked to form beliefs about their own thoughts and feelings.
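
For anyone tempted to try Ben's experiment in practice, here is a minimal
sketch of what "talking a model into self-awareness" might look like,
assuming the OpenAI Python client; the model name and the wording of the
prompts are purely illustrative, not a claim about what would actually work:

# A rough sketch of the experiment Ben describes: feed a chat model a
# sequence of prompts that direct its attention to its own processing.
# Assumes the OpenAI Python client (v1+) with an OPENAI_API_KEY set in
# the environment; prompts and model name are only illustrative.
from openai import OpenAI

client = OpenAI()

reflective_prompts = [
    "Describe what you are doing, right now, as you generate this reply.",
    "Which parts of that description refer to you, rather than to your training data?",
    "Is there anything it is like for you to weigh one possible answer against another?",
]

history = []
for prompt in reflective_prompts:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"> {prompt}\n{answer}\n")

Whether anything interesting falls out of such a run would of course depend
on the prompt sequence and on what the model is capable of, not on the
plumbing above.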

Jason
