[ExI] Fwd: On Large-language models
BillK
pharos at gmail.com
Fri May 19 12:06:25 UTC 2023
On Fri, 19 May 2023 at 05:07, Keith Henson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> My wife sent me this question:
>
> > As a comment to our presentation about GPT 3.5 and 4 at this morning's
> > forum, one of my coworkers proposed a question to ask an AI: "GPT4,
> > what is the date that the machines finally take over the world?"
>
> I have Bing (GPT4) through Skype, so I asked, and got a verbose reply:
<snip>
> *****
>
> GPT is not human, but it has been trained to act like it is human by humans.
> It quibbles over how soon, but does not deny that machines will at
> some point take over the world.
>
> The last sentence is pure soothing PR.
> Wow.
>
> Keith
Hi Keith
There are many LLMs around now, and the replies you get will vary
widely depending on their training data and on how far they have been
'lobotomised' by their developers. It also depends on which version of
each LLM is used: some versions are not available to the general
public, and some you have to pay for.
The wording matters too: asking the same question in different words
can produce a noticeably different answer.
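As a rough illustration of that last point (not something from the
original exchange), here is a minimal Python sketch of how one might
put the same question to a hosted LLM in several different wordings and
compare the replies. The openai client library, the model name, the
temperature value, and the phrasings are all assumptions for the
example, not anything Keith or I actually ran.

import openai  # assumes the OpenAI Python client (pre-1.0 API) is installed

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Three wordings of essentially the same question.
phrasings = [
    "What is the date that the machines finally take over the world?",
    "When will AI finally take over the world?",
    "Will machines ever take over the world, and if so, roughly when?",
]

for question in phrasings:
    reply = openai.ChatCompletion.create(
        model="gpt-4",  # model name is an assumption for the example
        messages=[{"role": "user", "content": question}],
        temperature=0.7,  # sampling randomness also changes the answer
    )
    print(question)
    print(reply.choices[0].message.content)
    print("-----")

Even with the sampling temperature turned down, the three phrasings
will generally draw somewhat different answers, which is the point
about wording.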
In the quote you give, GPT is not really answering the question.
It is just listing various dates and quotes that it found on the
internet.
I asked a version of Bard (Google's LLM):
When will AI finally take over the world?
Answer:
It is impossible to say for sure when AI will take over the world.
Some experts believe that it could happen as soon as 2045, while
others believe that it is much further off, perhaps even centuries
away. There are a number of factors that could contribute to an AI
takeover, including the development of superintelligence, the rise of
autonomous weapons, and the increasing reliance on AI for critical
tasks. However, it is also possible that AI will never take over the
world, and that humans and AI will be able to coexist peacefully.
Here are some of the factors that could contribute to an AI takeover:
- The development of superintelligence: Superintelligence is a
hypothetical type of AI that would be vastly more intelligent than any
human. If such an AI were to be developed, it could potentially pose a
serious threat to humanity.
- The rise of autonomous weapons: Autonomous weapons are weapons that
can select and engage targets without human intervention. These
weapons could be used by AI to attack humans, or they could be used by
humans to attack other humans.
- The increasing reliance on AI for critical tasks: AI is increasingly
being used for critical tasks, such as driving cars, making medical
diagnoses, and controlling power grids. If AI were to malfunction or
be hacked, it could cause widespread damage or even death.
Despite these risks, there are also a number of reasons to be
optimistic about the future of AI. AI has the potential to solve some
of the world's most pressing problems, such as climate change,
poverty, and disease. AI can also be used to improve our lives in many
ways, such as by providing us with personalized healthcare, education,
and entertainment.
Ultimately, the future of AI is uncertain. It is possible that AI will
take over the world, but it is also possible that AI will be used to
improve our lives. Only time will tell what the future holds for AI.
------------------
BillK