[ExI] Immeasurable hubris
protokol2020 at gmail.com
Tue Sep 9 10:56:03 UTC 2014
People often understand algorithms quite well. They really do. But they
are quite bad at implementing them; they are poor programmers.
Good programmers, on the other hand, have a very weak understanding of
evolution. (Even many evolutionists have the same problem!)
So the intersection of these two sets is small, but growing.
Sooner or later the field will explode with its potential, and we will have
quite an interesting situation.
On Tue, Sep 9, 2014 at 12:38 PM, Anders Sandberg <anders at aleph.se> wrote:
> Tomaz Kristan <protokol2020 at gmail.com> , 8/9/2014 12:53 AM:
> The last thing I want is some kind of arms race around those algorithms. So I
> don't talk a lot about the details. But it is very on-topic here to warn
> about the possibility of a quite sudden breakthrough here or there.
> Hmm. I see your point. At the same time there is a kind of paradox in
> trying to do something sensible with a potential information hazard: nobody
> really pays attention unless you demonstrate something impressive, and then
> everybody takes off to do it. It is hard to argue that X is a potential
> risk or opportunity unless somebody has demonstrated it - which is why so
> much AI and synthetic biology safety discussion is more stupid than it
> should be.
> Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford
> extropy-chat mailing list
> extropy-chat at lists.extropy.org