[ExI] AI extinction risk

Eugenio Martínez rolandodegilead at gmail.com
Mon Mar 17 01:29:48 UTC 2014


>
> When robot machinery replaces even the slave labour and computer tech
> replaces professional workers, then the consumer society is finished.
> People without an income don't buy enough stuff to maintain the current
> system.


I don't want to be seen as a communist, but if AI is going to become real,
we had better start changing the current system before it triggers our
extinction.

Alternatives? I see two clear ones:

1. Working credits: You are assigned an amount of work and you can invest
it wherever you want, so it earns you more money, and you can buy more
working credits, etc.

2. AI does the work, and the money is shared equally among all humans. So
everybody has food and health care, existence and education are assured, and
people don't have to spend 1/3 of their lives doing things just to keep the
other 2/3 afloat.

The first is just an extension of today's system, but everybody works in the
same way (investing work credits) and... everybody works. And there are still
poor and rich people.

The second sounds like a utopia. Not only that: it is a utopia that is
unbeatable and indestructible, as long as an AI police kept the system running.

On Sun, Mar 16, 2014 at 1:42 PM, Tim Tyler <tim at tt1.org> wrote:

>  On 15/03/2014 09:32, Bill Hibbard wrote:
>
>  My recent papers about technical AI risk conclude with:
>
>   This paper addresses unintended AI behaviors. However,
>   I believe that the greater danger comes from the fact
>   that above-human-level AI is likely to be a tool in
>   military and economic competition among humans and thus
>   have motives that are competitive toward some humans.
>
>
> Military and economic competition between groups seem far more likely
> to extinguish specific individuals to me too. It would therefore make
> considerable sense for individuals to focus on these kinds of problem.
>
> The rationale given for focusing on other risk scenarios seems to be
> that military and economic competition between groups is *relatively*
> unlikely to destroy everything - whereas things like "grey goo" or
> civilization-scale wireheading could potentially result in everyone's
> entire future being destroyed.
>
> Any evolved predispositions humans have are likely to direct them
> to focus on the first type of risk. I figure that these more personal
> risks will receive plenty of attention in due course.
> --
> __________
>  |im |yler  http://timtyler.org/  tim at tt1lock.org  Remove lock to reply.
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>


-- 
OLVIDATE.DE
Tatachan.com