[ExI] Watson on NOVA

spike spike66 at att.net
Tue Feb 15 16:58:12 UTC 2011


...On Behalf Of Richard Loosemore

>...There is nothing special about me, personally, there is just a peculiar
fact about the kind of people doing AI research, and the particular obstacle
that I believe is holding up that research at the moment...

Ja, but when you say "research" in reference to AI, keep in mind the actual
goal isn't the creation of AGI, but rather the creation of AGI that doesn't
kill us.  

After seeing the amount of progress we have made in nanotechnology in the
quarter century since K. Eric Drexler published Engines of Creation, I have
concluded that replicating nanobots are a technology that is out of reach of
human capability.  We need AI to master that difficult technology.  Without
replicating assemblers, we probably will never be able to read and simulate
frozen or vitrified brains.  So without AI, we are without nanotech, and
consequently we are all doomed, along with our children and their children
forever.

On the other hand, if we succeed at doing AI wrong, we are all doomed right
now.  It will decide it doesn't need us, or simply see no reason why we are
useful for anything.

When I was young, male and single (actually I am still male now) but when I
was young and single, I would have reasoned that it is perfectly fine to
risk future generations on that bet: build AI now and hope it likes us,
because all future generations are doomed to a century or less of life
anyway, so there's no reasonable objection to betting that against
eternity.

Now that I am middle-aged, male and married, with a child, I would do that
calculus differently.  I am willing to accept the risk that a future AI will
be able to upload a living person but not a frozen one, so that people of my
son's generation have a shot at forever even if it means that we do not.
There is a chance that a future AI could master nanotech, which gives me hope
as a corpsicle that it could read and upload me.  But I am reluctant to risk
my children's and grandchildren's 100 years of meat-world existence on just
getting AI going as quickly as possible.

In that sense, having AI researchers wander off into making toys (such as
chess software and Watson) is perfectly OK, and possibly desirable.

>...Give me a hundred smart, receptive minds right now, and three years to
train 'em up, and there could be a hundred people who could build an AGI
(and probably better than I could)...

Sure, but do you fully trust every one of those students?  Computer science
students are disproportionately young and male.

>...So, just to say, don't interpret the previous comment to be too much of
a mad scientist comment ;-)  Richard Loosemore

Ja, I understand the reasoning behind those who are focused on the goal of
creating AI, and I agree the idea is not crazed or unreasonable.  I just
disagree with the notion that we need to be in a desperate hurry to make an
AI.  We as a species can take our time and think about this carefully, and I
hope we do, even if it means you and I will be lost forever.

Nuclear bombs preceded nuclear power plants.

spike










