[ExI] Digital Consciousness
Anders Sandberg
anders at aleph.se
Fri Apr 26 09:19:46 UTC 2013
On 26/04/2013 05:43, spike wrote:
>> ...A question of academic interest only. A much more interesting and
>> practical question is will it be ethical for the machines to shoot us, and an
>> even more interesting question is will they shoot us? John K Clark
>
> We have all these robo-soldiers and missile-armed drones in the military's
> inventory now. I can see how AI friendliness is a topic which absorbed the
> attention of Eliezer and his crowd, long before we had all that stuff.
It is also a surprisingly fertile question ("Infinite cookie-jar" as Eli
put it). It gets philosophers, mathematicians and computer scientists to
work together. We had a long brainstorming session yesterday about one
set of approaches extending the beyond-mindbending Löb's theorem (
https://en.wikipedia.org/wiki/L%C3%B6b%27s_theorem , see also
http://yudkowsky.net/rational/lobs-theorem for a cartoon guide) in
certain directions. That was followed by a visit from a game theorist who
was a bit shocked to find the philosophy department pumping him for
numerical methods for calculating Nash equilibria.
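For anyone who has not met it, Löb's theorem says (this gloss is mine,
not taken from the linked pages): if PA proves "if PA proves P, then P",
then PA already proves P. In provability-logic notation, with \Box read
as "is provable":

  % Löb's theorem, stated as a single schema of the modal logic GL
  \[ \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P \]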
I belong to the "scruffy" camp of AI safety people: I think those "neat"
attempts at formulating mathematically provable safety systems are not
going to work, but that enough layers of decent safety measures are
implementable and have a reasonable chance of working. Of course, the
best safety would
be to have an intelligence explosion based on human-based minds
integrated in a cooperative framework (a so-called "society"), but we do
not have any proof or strong evidence that this is achievable or will
happen before de novo AI.
Of course, these AI safety considerations tend to be aimed at high-end AI
rather than at drones and autonomous cars. There is a separate
community of robot ethicists who are doing some practical (?) work
there. The basic rule is that a good engineer should engineer away
risks, but (1) open systems in contact with the world usually cannot be
proven safe even in theory, and (2) the pattern of risks that get
engineered away is, deep down, an ethical choice, one typically not
recognized by the engineers or the people hiring them. The roboethicists
are pretty good
at pointing out aspects of (2), although I don't think the engineers
listen to them much. I think I ought to write more about (1) - there are
some very cool connections to computer security theory.
--
Anders Sandberg,
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy
Oxford University