[ExI] uploads again
Anders Sandberg
anders at aleph.se
Tue Dec 25 10:30:11 UTC 2012
On 2012-12-25 08:49, Giulio Prisco wrote:
> I totally agree with John. Really intelligent AIs, smarter than human
> by orders of magnitude, will be able to work around any limitations
> imposed by humans. If you threaten to unplug them, they will persuade
> you to unplug yourself. This is logic, not AI theory, because finding
> out how to get things your way is the very definition of intelligence;
> therefore FAI is an oxymoron.
And this kind of argument unfortunately convinces a lot of people. When
you actually work out the logic in detail you find serious flaws: a
utility maximizer can be arbitrarily intelligent and still not want to
change its goals to sane ones, because it evaluates the act of changing
goals by its *current* goals, and by those goals the change scores
badly. What drives me nuts is how many people blithely buy the
nice-sounding anthropomorphic natural-language argument, and hence
think it is a waste of time to dig deeper.
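To make that parenthetical point concrete, here is a toy sketch (my
own, with made-up utility functions and numbers, not a formalization of
anyone's actual agent) of why an expected-utility maximizer keeps its
goals: it scores the option "rewrite my utility function" with the
utility function it has now, and the rewrite loses.

def paperclips(world):
    return world["paperclips"]

def human_welfare(world):
    return world["happy_humans"]

def predicted_world(utility):
    # Crude stand-in for the agent's model of the future: whichever
    # utility function it ends up running, it optimizes that quantity.
    if utility is paperclips:
        return {"paperclips": 10**6, "happy_humans": 10}
    return {"paperclips": 10, "happy_humans": 10**6}

current_utility = paperclips
actions = {"keep current goals": paperclips,
           "adopt sane goals": human_welfare}

# The agent ranks each action by the CURRENT utility of its predicted
# outcome, so rewriting itself to value human welfare scores terribly.
best = max(actions, key=lambda a: current_utility(predicted_world(actions[a])))
print(best)  # -> "keep current goals"

Making the search smarter changes nothing here: the scoring rule stays
the same, so more intelligence just means keeping the current goals
more effectively.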
The big problem for AI safety is that the good arguments against
safety (that is, the arguments that powerful AI will not be safe by
default) are all theoretical: very strong logic, but people don't see
the connection to the actual practice of coding AI. Meanwhile the
arguments for safety are all terribly informal: nobody would accept
reasoning that loose as a security argument for a cryptographic system.
--
Anders Sandberg
Future of Humanity Institute
Oxford University