[ExI] thawing of ai winter?

Keith Henson hkeithhenson at gmail.com
Sat Apr 18 18:09:37 UTC 2020


Ben Zaiboc <ben at zaiboc.net> wrote:

On 18/04/2020 01:51, Rafal wrote:
>> There may be two ways of doing it. The Tesla way is to use gazillions
>> of data points from millions of drivers to force the deep learning
>> network to generate all the solutions for multi-layered analysis of
>> the world, in a way recapitulating the evolution from the lowly worm
>> all the way to near-human understanding. Since the original
>> evolutionary process took 600 million years, this method might take
>> longer than expected. The other way is to look at the pre-wired brain
>> structure and try to transfer insights from there to pre-wire a deep
>> learning network.
>>
>> Does it make sense?
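
(A minimal sketch of the second approach, an illustration rather than
anything from the original post: assuming PyTorch, the first layer
below is "pre-wired" with fixed edge-detecting filters, standing in
for structure supplied by evolution, while the rest of the network is
learned from data as usual.)

import torch
import torch.nn as nn

class PreWiredNet(nn.Module):
    def __init__(self):
        super().__init__()
        # "Pre-wired" first layer: fixed Sobel edge detectors, a stand-in
        # for connectivity supplied by evolution rather than learned.
        self.prewired = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        self.prewired.weight.data = torch.stack(
            [sobel_x, sobel_x.t()]).unsqueeze(1)
        self.prewired.weight.requires_grad = False  # frozen, never trained

        # Everything past the pre-wired layer is trained from data.
        self.learned = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 10),
        )

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale images
        return self.learned(self.prewired(x))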

> Makes a great deal of sense to me, but it's not the people on this list
> that need convincing, and I'm the wrong Ben. I had an exchange years ago
> with a more relevant Ben (Goertzel) about the relevance of modelling
> existing brains to AI research, but he wasn't a fan, and thought a more
> direct computational approach was the way to go, bypassing biological
> brains altogether.

My work in evolutionary psychology makes me think that building AI
based on human brains is an intolerably dangerous approach.

Humans carry rarely invoked psychological traits such as
capture-bonding, those related to going to war, and the perhaps
related trait of being infested with religions.

An AI built on human brain structure would inherit those traits, and
one that perceived a looming resource crisis could be triggered into
going to war with humans.

> Maybe he, or other AI researchers, can be persuaded
> otherwise now.

I hope not.

Keith

