[ExI] John's idea
William Flynn Wallace
foozler83 at gmail.com
Tue Jan 31 23:01:28 UTC 2017
> So my libertarian beliefs must evolve with circumstances.
OK, so we feed people who do not work; it would be immoral not to.
But in turn, we cannot tolerate an increasing population with no end in
sight. China's policy was a disaster, but I think I could tolerate
infringing on people's right to have children by limiting them to two. Maybe
along the way that will be lowered to one.
I know that this group has ignored my attitude about overpopulation
before, but nobody wants all wild animals confined to small parks and the
rest of the planet featuring concrete and high-rises. Do they? Billions
and billions more people - what's the point? In the past, people wanted as
many children as they could have, both to provide for them in their old age
and to labor on the family farm or business.
Robots and Social Security have made all those children superfluous. Do we
need more people to sit around, watch TV, and be on the dole? A better
recipe for existential angst I cannot imagine.
Starting right now we need to find work for people. People aren't evolved
to sit all day long; they are evolved to work at something, and I am not
talking about picking up trash either - maybe for people below IQ 70 or
something. We also have wasted talent: Ph.D.s (not in English or social
science) driving cabs and the like.
I think providing health care and pensions is keeping a lot of people from
being hired. Adjuncts without those benefits are far too popular at colleges.
On Mon, Jan 30, 2017 at 3:30 PM, John Clark <johnkclark at gmail.com> wrote:
> On Mon, Jan 30, 2017 William Flynn Wallace <foozler83 at gmail.com> wrote:
>> How in the world would you ever program an AI to decide which project
>> will turn out to be the next iPhone or Hula Hoop?
> I can't give you the computer code to show exactly how, but if an AI can
> diagnose disease better than any human, I see no reason why an AI couldn't
> manage a hedge fund better than any human. I don't believe in the secret
> sauce theorem, the idea that the human brain has a certain something that
> computers can never duplicate. I have always thought this, but I figured it
> was so far in the future before AIs would have practical significance that
> there was no point in bringing it up in a thread about current events or
> libertarian philosophy. I figured wrong: the economic consequences of AI
> are relevant right now and will become more so every single day. So my
> libertarian beliefs must evolve with circumstances.
>> we all know many cases in many areas, such as book publishing, where a
>> great book had to go through a dozen publishers to find one that would print it
> So human capitalists are far from perfect and there is plenty of room for
> improvement. Cue the robots.
> John K Clark
> extropy-chat mailing list
> extropy-chat at lists.extropy.org