[ExI] Hofstadter now fears AI

Keith Henson hkeithhenson at gmail.com
Fri Jul 7 16:35:07 UTC 2023


Back when I was posting on sl4, I may have slightly improved the
safety of AI research.  At that time, one of the paths to AI being
considered was to blindly copy a human brain into fast hardware.

I threw cold water on that idea because at the time I was working on
poorly recognized human psychological traits such as capture-bonding
(https://en.citizendium.org/wiki/Capture-bonding) and the
psychological mechanisms that lead humans into war.  I think the path
to war is triggered by the detection of a looming resource shortage,
and humans are generally blind to the fact that they have these
evolved psychological mechanisms.

Blindly copying a brain into a powerful AI that could see a looming
resource crisis and go to war with humans over those resources seems
like a really bad idea.

The difficulty of scanning a brain may have been more significant
than my concerns, but in any case, you don't hear much about this
approach to AI today.

Keith

On Fri, Jul 7, 2023 at 7:23 AM Brent Allsop via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
>
> No, AI will be much smarter than current average humans.
> But re-architected brain intelligence is about to start exploding even faster than all that, to say nothing of using AIs as personal intelligence amplification systems.
> Uploading to exponentially more capable systems, uploading to bodies that don't need space or diving suits...
> Why does everyone completely ignore all that?
>
>
> On Thu, Jul 6, 2023 at 6:31 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>> “AI is about to eclipse humans”, predicts a leading academic
>> 04/07/2023
>>
>> <https://nationworldnews.com/ai-is-about-to-eclipse-humans-predicts-a-leading-academic/>
>>
>> Quotes:
>> Over the years, Douglas Hofstadter has been a strong advocate of
>> keeping the terms intelligence and artificial intelligence separate,
>> considering that technologies based on machine learning and other
>> models are not worthy of being regarded as intelligent in the way
>> humans are. Now systems like ChatGPT have upended this academic's
>> expectations: he is “terrified” by the future of these powerful AI
>> models.
>>
>> “Very soon, it’s quite possible that they (ChatGPT and other AIs) will
>> be smarter than us, much smarter than us,” Hofstadter said in an
>> interview.
>>
>> He says that his faith has been shaken by what he describes as a
>> traumatic experience: he now believes that humans will soon be
>> eclipsed by machines.
>> --------------------
>>
>> The relevant 8 minutes of the video interview is here:
>> <https://www.youtube.com/watch?v=lfXxzAVtdpU&t=1763s>
>> -------------
>>
>> It sounds like he is now agreeing with Eliezer.
>>
>> BillK
>>


