[ExI] Yes, the Singularity is the greatest threat to humanity

Stefano Vaj stefano.vaj at gmail.com
Tue Jan 18 15:58:19 UTC 2011


On 17 January 2011 23:54, Anders Sandberg <anders at aleph.se> wrote:
> Stefano Vaj wrote:
>> I am still persuaded that the crux of the matter remains a less
>> superficial consideration of concepts such as "intelligence" or
>> "friendliness".  I suspect that at any level of computing power,
>> "motivation" would only emerge if a deliberate effort is made to emulate
>> human (or at least biological) evolutionary artifacts such as a sense of
>> identity, survival instinct, etc., which would certainly be interesting,
>> albeit probably much less crucial to their performance and flexibility than
>> one may think.
>
> "Motivation" does not have to be anything like human motivations. As
> Wikipedia says, "Motivation is the driving force which causes us to achieve
> goals." - a chess playing system can be said to have a motivation to win
> games built into itself, just like Schmidhuber's Gödel machine and Hutter's
> AIXI have a motivation to maximize their utility functions.

Absolutely. But of course we could also try to emulate "human-like"
motivations with arbitrary degrees of accuracy.

Even though this would be an interesting and satisfying achievement in
its own right, it is not clear, performance issues aside, what it
would have to do with "intelligence" and "risk" in a broader and more
rigorous sense. I suspect, however, that only such an emulation would
be considered an "AGI" by those who discuss the "friendliness" or
"unfriendliness" thereof.

> Actually thinking about the risks and problems before promoting technologies
> is a sane thing. If there is a big danger with it we better think about
> effective solutions to it. I'm rather a transluddite than promoter of every
> shiny new technology - cobalt bombs are shiny too.

Absolutely right again. I am only saying
a) that there is no shortage of people presenting the case against
single technologies and/or in favour of the precautionary principle;
b) that it is by no means obvious that computers are made any more (or
less, for that matter: see under Robot-God) dangerous by
"intelligence" in the AGI sense.

-- 
Stefano Vaj



