[ExI] nick's book being sold by fox news
giulio at gmail.com
Tue Oct 28 09:47:39 UTC 2014
I don't think a paperclipper would stay a paperclipper forever. Sooner
or later it would expand. In general, I don't think we can design and
freeze the value and motivational systems of an entity smarter than
us, for the same reasons we can't do that with children. At some point
the entity would start to do what _he_ wants. Isn't that part of the
definition of intelligence?
I don't dismiss the possibility of accidental extermination by a
paperclipper still in the paperclipper phase, but I think the
paperclipper would become a mind child in the long term, perhaps
incorporating mind grafts from you and me.
On Tue, Oct 28, 2014 at 10:31 AM, Anders Sandberg <anders at aleph.se> wrote:
> Giulio Prisco <giulio at gmail.com> , 28/10/2014 7:36 AM:
> As I said in my review of Nick's book, that depends on what is "our
> species." If it includes our future intelligent and super-intelligent
> mind children, then there is no danger of extermination by
> But this presupposes that it must be a continuation of us or at least
> something valuable, not a paperclipper or a mindless world of
> superintelligent but non-conscious corporations. You write that you have a
> hard time imagining something smart without a sense of self, yet we know
> AIXI - remember, it is as smart as or smarter than any other system - would
> assign zero probability to itself existing even if it existed!
> I like the idea of mind children continuing our civilization. But that is
> very different from being concerned about risk: just because a power plant
> might produce energy too cheap to meter doesn't mean it is a waste of time
> doing a risk analysis.
> Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford
> extropy-chat mailing list
> extropy-chat at lists.extropy.org