[ExI] AI thoughts

Keith Henson hkeithhenson at gmail.com
Wed Nov 22 00:25:02 UTC 2023


On Tue, Nov 21, 2023 at 12:08 PM Jason Resch <jasonresch at gmail.com> wrote:
>
> On Tue, Nov 21, 2023 at 2:24 PM Keith Henson via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>> The LLMs behind AI chatbots are trained on what humans know.  They
>> don't seem any more dangerous to me than exceptionally well-informed humans.
>>
>> Of course, I could be wrong.
>
> Nick Bostrom in "Superintelligence" defines three possible dimensions of superintelligence:
>
> 1. Speed
> 2. Depth
> 3. Breadth
>
> You have identified that LLMs, having read everything on the Internet, every book, every Wikipedia article, etc., already vastly outstrip humans in breadth of knowledge. I think another dimension in which they already outpace us is speed: they can compose entire articles in a matter of seconds, which might take a human hours or days to write.

Agree.

> This is another avenue by which superintelligence could run away from us (if a population of LLMs could drive technological progress forward 100 years in one year's time,

Drexler talked about this in the early 1980s.  Even the combination of
human engineers and LLMs is going to speed things up somewhat.

> then humans would no longer be in control, even if individual LLMs are no smarter than human engineers).

I don't think humans have been in control for a long time, certainly
not individual humans, and I just don't believe in "elders" or any
other group that is exerting control.  Every single human is like a
wood chip on a river.

> I think depth of reasoning may be one area where the best humans are currently dominant, but a small tweak, such as giving LLMs a working memory and recursion, as well as tightening up their ability to make multiple deductive logical steps/leaps, could quickly change this.
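To make that tweak concrete, here is a rough Python sketch of the kind
of loop "working memory and recursion" suggests. It is purely
illustrative; call_llm is a hypothetical stand-in for whatever
chat-model API one has, not a real library call.

# Toy sketch: an LLM wrapped in explicit working memory and recursion.
# call_llm is a hypothetical stand-in for any chat-completion API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

def reason(question: str, memory: list[str],
           depth: int = 0, max_depth: int = 5) -> str:
    """Recursively decompose a question, keeping intermediate
    conclusions in an explicit working memory."""
    if depth >= max_depth:
        # Out of recursion budget: answer directly from the notes so far.
        return call_llm(f"Answer directly: {question}\nNotes: {memory}")
    step = call_llm(
        f"Question: {question}\n"
        f"Working memory: {memory}\n"
        "Reply 'FINAL: <answer>' or name one subquestion to resolve first."
    )
    if step.startswith("FINAL:"):
        return step[len("FINAL:"):].strip()
    # Resolve the subquestion recursively and remember the result.
    memory.append(f"{step} -> {reason(step, memory, depth + 1, max_depth)}")
    return reason(question, memory, depth + 1, max_depth)

The point is only that the memory and the recursion live outside the
model; the model itself is unchanged.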

Can you make a case that it would be worse than the current situation?

Keith

> Jason


