[ExI] The AGI and limiting it
Richard Loosemore
rpwl at lightlink.com
Thu Mar 6 17:48:47 UTC 2008
Lee Corbin wrote:
> Richard writes
>
>> Lee Corbin wrote:
>>
>>> Again, my sincere apologies if this is all old to you
>>> [those who appeared to be new to the subject]
>>> But the reason that Robert Bradbury, John Clark,
>>> and people right here consider that we are probably
>>> doomed is because you can't control something that
>>> is far more intelligent than you are.
>> The analysis of AGI safety given by Eliezer is weak to the point of
>> uselessness, because it makes a number of assumptions about the
>> architecture of AGI systems that are not supported by evidence
>> or argument.
>
> Sorry, but quite a number of us have found those arguments to be
> very convincing, though, of course by no means the final word.
Arguments always look convincing when not subjected to skeptical
challenge. Aristotle got a lot of mileage that way.
It is distressing to see so much that is so obviously silly believed by
so many.
>> Your comment "I know this seems difficult to believe, but that is what
>> people have concluded who've thought about this for years and years and
>> years" makes me smile.
>
> Yes, I should have acknowledged the existence of the dissenting views
> (which have traditionally received little support here on the Extropian
> list).
Those with a scientific attitude should discuss these issues by examining
the arguments dispassionately.
I have found that on the AGI and Singularity lists people often do
exactly that. There is sometimes vigorous disagreement, but for the
most part it is about the issues themselves, not about personalities.
By contrast, I have found that elsewhere, as in your comment above, the
common response is to say that *because* a majority of people on SL4 or
on the Extropian list take a dim view of these alternative ideas about
the friendliness problem, this majority vote counts as some kind of
argument.
>> Some of those people who have thought about it for years and years and
>> years were invited to discuss these issues in greater depth, and examine
>> the disputed assumptions. The result? They mounted a vitriolic
>> campaign of personal abuse against those who wanted to suggest that
>> Eliezer might not be right, and banned them from the SL4 mailing list.
>
> I.e., you got banned. How many other people were banned from that list
> simply because they disagreed with the majority?
The phrasing of this question is a little insidious: you clearly imply
that the banning was the result of simply disagreeing with the majority.
Clever. It changes the subject of discussion from "The destructive
consequences that ensued when someone tried to get an alternative point
of view about friendliness discussed on SL4" to the highly personalized
topic of why *I* in particular was banned from SL4.
>> You will find that a much broader and more vigorous discussion of AI
>> safety issues has been taking place on the AGI mailing list for some
>> time now.
>
> Thanks for the information. You probably should provide a link.
The link is in a separate message: my mistake for not including it
in the first place.
Richard Loosemore