[ExI] Watson on NOVA

Kelly Anderson kellycoinguy at gmail.com
Thu Feb 17 07:43:18 UTC 2011

On Tue, Feb 15, 2011 at 9:58 AM, spike <spike66 at att.net> wrote:
> Ja, but when you say "research" in reference to AI, keep in mind the actual
> goal isn't the creation of AGI, but rather the creation of AGI that doesn't
> kill us.

Why is that the goal? As extropians, isn't the idea to reduce entropy?
Humans may be more prone to entropy than some higher life form. In
that case, shouldn't we strive to evolve into that higher form and let
go of our physical natures? If our cognitive patterns are preserved
and enhanced, we have achieved a level of immortality, and perhaps
become AGIs ourselves. That MIGHT be a good thing. Then again, it
might not be. On further reflection, I just don't see your statement
above as self-evident.

> After seeing the amount of progress we have made in nanotechnology in the
> quarter century since the K.Eric published Engines of Creation, I have
> concluded that replicating nanobots are a technology that is out of reach of
> human capability.  We need AI to master that difficult technology.

But if humans can create the AI that creates the replicating nanobots,
then in a sense they aren't out of human reach.

> Without
> replicating assemblers, we probably will never be able to read and simulate
> frozen or vitrified brains.  So without AI, we are without nanotech, and
> consequently we are all doomed, along with our children and their children
> forever.
> On the other hand, if we are successful at doing AI wrong, we are all doomed
> right now.  It will decide it doesn't need us, or just sees no reason why we
> are useful for anything.

And that is a bad thing exactly how?

> When I was young, male and single (actually I am still male now) but when I
> was young and single, I would have reasoned that it is perfectly fine to
> risk future generations on that bet: build AI now and hope it likes us,
> because all future generations are doomed to a century or less of life
> anyway, so there's no reasonable objection to betting that against
> eternity.
> Now that I am middle aged, male and married, with a child, I would do that
> calculus differently.  I am willing to risk that a future AI can upload a
> living being but not a frozen one, so that people of my son's generation
> have a shot at forever even if it means that we do not.  There is a chance
> that a future AI could master nanotech, which gives me hope as a corpsicle
> that it could read and upload me.  But I am reluctant to risk my children's
> and grandchildren's 100 years of meat world existence on just getting AI
> going as quickly as possible.

Honestly, I don't think we have much choice about when AI gets
going. We can all make choices as individuals, but I see it as more or
less inevitable. Ray K seems to have this mindset as well, so I feel
I'm in pretty good company on this one.

> In that sense, having AI researchers wander off into making toys (such as
> chess software and Watson) is perfectly OK, and possibly desirable.
>>...Give me a hundred smart, receptive minds right now, and three years to
> train 'em up, and there could be a hundred people who could build an AGI
> (and probably better than I could)...
> Sure but do you fully trust every one of those students?  Computer science
> students are disproportionately young and male.
>>...So, just to say, don't interpret the previous comment to be too much of
> a mad scientist comment ;-)  Richard Loosemore
> Ja, I understand the reasoning behind those who are focused on the goal of
> creating AI, and I agree the idea is not crazed or unreasonable.  I just
> disagree with the notion that we need to be in a desperate hurry to make an
> AI.  We as a species can take our time and think about this carefully, and I
> hope we do, even if it means you and I will be lost forever.
> Nuclear bombs preceded nuclear power plants.

Yes, and no doubt many of the most interesting AI applications are
military in nature.
