[ExI] Mathematicians as Friendliness analysts

Samantha Atkins sjatkins at mac.com
Tue Nov 16 06:10:30 UTC 2010


On Nov 14, 2010, at 6:57 PM, Richard Loosemore wrote:

> Michael Anissimov wrote:
>> On Sat, Nov 13, 2010 at 2:10 PM, John Grigg <possiblepaths2050 at gmail.com <mailto:possiblepaths2050 at gmail.com>> wrote:
>>     And I noticed he did "friendly AI research" with
>>    a grad student, and not a fully credentialed academic or researcher.
>> Marcello Herreshoff is brilliant for any age.  Like some of our other Fellows, he has been a top scorer in the Putnam competition.  He's been a finalist in the USA Computing Olympiad twice.  He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School.  Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics.  That way, in 2020, we will have people who have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI.
> 
> Michael,
> 
> This is entirely spurious.  Why gather mathematicians and computer science specialists to work on the "friendliness" problem?
> 
> Since the dawn of mathematics, the challenges to be solved have always been specified in concrete terms.  Every problem, without exception, is definable in an unambiguous way.  The friendliness problem is utterly unlike all of those.  You cannot DEFINE what the actual problem is, in concrete, unambiguous terms.
> 

Mathematics may be said to be the study of pattern qua pattern, of patterns of patterns.  A "Friendliness" that cannot be captured or described accurately, or ever measured or used to measure alternatives, is not an engineering goal at all.  Personally I think it is so vague as to be useless.  I would rather see work on a general ethics that applies even to beings of wildly different capabilities that are not mutually interdependent.  That seems much more likely to lead to benign behavior by an advanced AGI toward humans than attempting to coerce Friendliness at an engineering level.

Of course the rub with this general ethics is that humans don't even seem able to come up with a generally agreed ethics for the much narrower case of other members of their own species.  This suggests either that such a general ethics is impossible or that humans are not very good at ethical reasoning.

- samantha
