[ExI] Future of Humanity Institute at Oxford University £1 million grant for AI
Giulio Prisco
giulio at gmail.com
Sat Jul 4 05:15:43 UTC 2015
Anders says:"But the current approach to AI safety is like never
talking with the kids about morals, emotions or societal conventions,
nor giving them feedback on what they do except instrumental success
("Great work in forcing open the gun cabinet!") What we aim at doing
is rather like figuring out what kind of upbringing is less likely to
produce school shootings, sociopathy or unhappy career choices."
Figuring out the best kinds of upbringing is an experimental science:
you need to study what actually happened in the lives of many people
and try to correlate that with their upbringing (and you know that the
results of such studies can be quite counter-intuitive). We have no
data points for AIs.
Also, I have a hunch that if you examine one of these studies you will
find that the deviations from any correlation are bigger for smart and
emotionally strong people, because they are better able to shed their
conditioning one way or another.
For superAIs, remember that, using the analogy in Nick's book, we are
talking of _really_ smarter entities, not in the sense that Einstein
is smarter than the village idiot, but in the sense that humans are
smarter than beetles. Beetles couldn't control humans for long: they
couldn't lock me in a room, because they aren't smart enough to have
locks and keys. Etc.
Don't get me wrong, I am super happy that the FHI got the funding,
because you and Nick are my friends and I am sure the FHI will do
something good with the money, but I still think that hoping to
influence, control, condition, or program superAIs is a contradiction
in terms.
G.
On Fri, Jul 3, 2015 at 6:57 PM, Anders Sandberg <anders at aleph.se> wrote:
> From: Giulio Prisco <giulio at gmail.com>
>
> They should have sent a couple of hundred bucks my way, and I would
> have advised them to leave the rest of the money in the bank.
> Superintelligent AIs will do what they want to do. That's the
> definition of intelligence, super or not. Trying to program or enforce
> behaviors or values in a super-smart AI is like telling your smart and
> rebellious kids to stay home and study instead of going out and having
> fun. Same thing, and same result.
>
>
> But the current approach to AI safety is like never talking with the kids
> about morals, emotions or societal conventions, nor giving them feedback on
> what they do except instrumental success ("Great work in forcing open the
> gun cabinet!"). What we aim at doing is rather like figuring out what kind
> of upbringing is less likely to produce school shootings, sociopathy or
> unhappy career choices.
>
> Also, there are the lesser AIs to be concerned about. You want to make sure
> they can interpret our intentions, laws or norms in ways that actually
> work. Superintelligent entities may be smart enough to be safe even when
> merely "smart" agents are very unsafe (but see the whole analysis of why
> emergent AI values are not guaranteed to stay close to ours or anything
> sane; Inceptionist pictures are a pretty good example of what happens when
> we let AI preferences run free:
> http://d.ibtimes.co.uk/en/full/1445360/psychedelic-images-generated-by-googles-neural-network.jpg
> )
>
>
> Anders Sandberg, Future of Humanity Institute
> Philosophy Faculty of Oxford University
>
>