[ExI] AI thoughts

Jason Resch jasonresch at gmail.com
Tue Nov 21 20:08:37 UTC 2023


On Tue, Nov 21, 2023 at 2:24 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> The LLMs of AI chatbots are trained on what humans know.  They don't
> seem any more dangerous to me than exceptionally well-informed humans.
>
> Of course, I could be wrong.
>

Nick Bostrom in "Superintelligence" defines three possible dimensions of
superintelligence:

1. Speed
2. Depth
3. Breadth

You have identified that LLMs, having read everything on the Internet,
every book, every Wikipedia article, etc., already vastly outstrip humans
in breadth of knowledge. I think another dimension in which they already
outpace us is speed. They can compose entire articles in a matter of
seconds, something that might take a human hours or days to write. This is
another avenue by which superintelligence could run away from us (if a
population of LLMs could drive technological progress forward 100 years in
one year's time, then humans would no longer be in control, even if
individual LLMs are no smarter than human engineers).

I think depth of reasoning may be one area where the best humans are
currently dominant, but a small tweak, such as giving LLMs a working memory
and recursion, as well as tightening up their ability to make multiple
deductive logical steps/leaps, could quickly change this.

Jason
