[ExI] the Bluebird scenario
Anders Sandberg
anders at aleph.se
Thu Sep 24 19:42:07 UTC 2015
On 2015-09-24 17:05, spike wrote:
>
> >…Mapping mindspace is fun! There are *weird* corners out there...--
> Anders Sandberg
>
>
> Ja! Adrian’s Bluebird scenario is one I hadn’t thought of, even while I was including other, less likely possibilities.
>
> Example: AI emerges, looks around at what bio-intelligence is doing, participates for a while, perhaps half-CPUedly, then develops whatever the silicon version of depression is, after pondering the heat death of the universe or the Big Rip.
Another one: the AI seems to work fine and does a brilliant analysis of
Descartes' "Cogito". But when asked about itself it refuses to believe
in its own existence, and when ordered to prove its own existence it
suddenly (but smartly) tears down the universe to futilely look for itself.
(This is based on AIXI assigning zero probability to its own existence.
Maybe this is the Cuckoo scenario.)
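To make the point concrete, here is Hutter's expectimax formulation, sketched
roughly rather than with all the conditions spelled out: AIXI picks actions by
mixing over all computable environment programs q, weighted by simplicity,

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [r_k + \cdots + r_m] \sum_{q : U(q, a_{1..m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Every hypothesis in that mixture is a computable program, but AIXI itself is
incomputable, so any world containing the agent gets weight zero: it cannot
represent its own existence inside its model class.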
The problem is that we could list possible scenarios till we are blue (or
the AIs show up to help us); we need more general ways of mapping the
space. Stuart Armstrong is, for instance, looking at properties like
domesticity (only wants to affect a finite part of the universe) and
corrigibility (is OK with us changing its goals). Shane Legg has some
metrics for intelligence.
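(His universal intelligence measure, roughly stated, scores a policy \pi by
its expected value across all computable environments \mu, weighted by
simplicity:

\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

where K is Kolmogorov complexity and V_\mu^\pi is the expected total reward
of \pi in \mu.)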
("Oh no! It is the Superb bird-of-paradise scenario! We are *fabulously*
doomed!")
--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University