[ExI] The Robot Big Bang

Mark Walker markalanwalker at gmail.com
Sun Feb 22 19:35:05 UTC 2015


Anders, you may be right about the cause of resistance: ideologically
pungent solutions. In practice, it is pretty hard to discuss problems
without also discussing proposed solutions. Even if you testify before
Congress about technological unemployment and take a vow of silence about
possible solutions, not everyone will keep quiet. Perhaps the best way to
get people to accept there is a problem is to offer a broader range of
ideologically acceptable solutions. For example, I suspect many conservatives
here in the US would prefer the solution of making Soylent Green out of
redundant workers to a BIG (basic income guarantee) for those
displaced by robots.

I'm writing a book on BIG to be published later this year. One of the
arguments for BIG is that it will help soften the blow of technological
unemployment. I also argue BIG will increase gross national happiness and
gross national freedom. One of the arguments I think conservative
types will hate most is that BIG should be thought of as a
stock dividend. eBay shows us how to make money by owning a market. The US is
a much more successful market than eBay, so it offers much greater
potential for profit. The owners of the US market (US citizens)
shouldn't run their market like a dilapidated hippy co-op, but should try
to maximize profit in the same way that eBay does. This profit can then
be redistributed to shareholders in the form of a dividend (BIG).
http://philos.nmsu.edu/files/2014/07/chapter-3-BIG-BOOK-2015.docx

Cheers,
Mark


Dr. Mark Walker
Richard L. Hedden Chair of Advanced Philosophical Studies
Department of Philosophy
New Mexico State University
P.O. Box 30001, MSC 3B
Las Cruces, NM 88003-8001
USA
http://www.nmsu.edu/~philos/mark-walkers-home-page.html

On Sun, Feb 22, 2015 at 7:57 AM, Anders Sandberg <anders at aleph.se> wrote:

> BillK <pharos at gmail.com>, 22/2/2015 11:35 AM:
>
> On 21 February 2015 at 23:54, Anders Sandberg wrote:
> <snip>
> > Understanding these complications and that there likely is a big automation
> > shift matters. As does explaining it properly to decisionmakers. I am a bit
> > worried that right now it turns into a simplistic "The robots are coming,
> > so we need basic income", which means some politicians will immediately
> > accept or dismiss it depending on their views of basic income, and hence
> > deduce that robots are either a problem or not a problem...
>
>
> A starving population is a political problem.
> Note the millions receiving food stamps (and the millions in prison)
> in the USA and the desperate attempts in the UK to try to reduce
> welfare costs.
>
>
>
> You are missing my point. What is seen as a problem often depends on one's
> political outlook. And whether a problem is acknowledged may depend on
> whether the solutions are acceptable or not.
>
> In the US poverty and incarceration are not seen as major problems by a
> large fraction of people. One strong reason IMHO is that many suggested
> solutions - redistribution, unified healthcare systems, a non-retributive
> penal system - are unacceptable to them for ideological reasons. Yes, this
> is totally backwards. In a sane world people would identify problems first,
> then look for solutions, and then agree on the acceptable ones. But in
> practice people turn things around. Which is why so many of your
> politicians are convinced there cannot be anthropogenic climate change - the
> proposed solutions smell bad ideologically.
>
>
> http://www.aleph.se/andart/archives/2014/06/do_we_have_to_be_good_to_set_things_right.html
>
> So if you want to sell politicians on the idea that the robots are coming,
> do not link it too strongly to a particular socioeconomic remedy.
>
> Otherwise, I foresee a real risk that we will end up with the US liberals
> embracing the robot big bang as a reason to have guaranteed basic income,
> and hence the US conservatives systematically blocking any research into AI
> consequences as a result. The end result might be no income and no safety
> at all.
>
>
>
> Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford
> University
>
