[ExI] Watson on NOVA

Richard Loosemore rpwl at lightlink.com
Tue Feb 15 17:34:30 UTC 2011


spike wrote:
> ...On Behalf Of Richard Loosemore
> 
>> ...There is nothing special about me, personally; there is just a peculiar
>> fact about the kind of people doing AI research, and the particular obstacle
>> that I believe is holding up that research at the moment...
> 
> Ja, but when you say "research" in reference to AI, keep in mind the actual
> goal isn't the creation of AGI, but rather the creation of AGI that doesn't
> kill us.  
> 
> After seeing the amount of progress we have made in nanotechnology in the
> quarter century since K. Eric Drexler published Engines of Creation, I have
> concluded that replicating nanobots are a technology out of reach of unaided
> human capability.  We need AI to master that difficult technology.  Without
> replicating assemblers, we will probably never be able to read and simulate
> frozen or vitrified brains.  So without AI we are without nanotech, and
> consequently we are all doomed, along with our children and their children,
> forever.
> 
> On the other hand, if we succeed at doing AI but do it wrong, we are all
> doomed right now.  It will decide it doesn't need us, or just see no reason
> why we are useful for anything.
> 
> When I was young, male and single (actually I am still male), I would have
> reasoned that it is perfectly fine to risk future generations on that bet:
> build AI now and hope it likes us, because all future generations are doomed
> to a century or less of life anyway, so there's no reasonable objection to
> betting that against eternity.
> 
> Now that I am middle-aged, male and married, with a child, I would do that
> calculus differently.  I am willing to accept the risk that a future AI can
> upload a living being but not a frozen one, so that people of my son's
> generation have a shot at forever even if it means that we do not.  There is
> a chance that a future AI could master nanotech, which gives me hope as a
> corpsicle that it could read and upload me.  But I am reluctant to risk my
> children's and grandchildren's 100 years of meat-world existence on just
> getting AI going as quickly as possible.
> 
> In that sense, having AI researchers wander off into making toys (such as
> chess software and Watson) is perfectly OK, and possibly desirable.
> 
>> ...Give me a hundred smart, receptive minds right now, and three years to
>> train 'em up, and there could be a hundred people who could build an AGI
>> (and probably better than I could)...
> 
> Sure, but do you fully trust every one of those students?  Computer science
> students are disproportionately young and male.
> 
>> ...So, just to say, don't interpret the previous comment to be too much of
>> a mad scientist comment ;-)  Richard Loosemore
> 
> Ja, I understand the reasoning behind those who are focused on the goal of
> creating AI, and I agree the idea is not crazed or unreasonable.  I just
> disagree with the notion that we need to be in a desperate hurry to make an
> AI.  We as a species can take our time and think about this carefully, and I
> hope we do, even if it means you and I will be lost forever.
> 
> Nuclear bombs preceded nuclear power plants.

The problem is, Spike, that you (like many other people) speak of AI/AGI 
as if the things that it will want to do (its motivations) will only 
become apparent to us AFTER we build one.

So, you say things like "It will decide it doesn't need us, or just see 
no reason why we are useful for anything."

This is fundamentally and devastatingly wrong.  You are basing your 
entire AGI worldview on a crazy piece of accidental black propaganda 
that came from science fiction.

In fact, an AGI's motivations will have to be designed, and there are 
ways to design those motivations to make the machine friendly.
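
To make that concrete, here is a toy sketch (mine, not anything from 
this thread; every name in it is hypothetical) of what "designed 
motivation" means: the agent's preferences are an explicit component 
the designer writes down and can inspect, not something that emerges 
behind our backs after the machine is switched on.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    benefit_to_humans: float   # hypothetical designer-assigned score
    benefit_to_agent: float

def designed_utility(a: Action) -> float:
    # The motivation system is an engineering artifact: human welfare
    # is part of the objective by construction, so "deciding it
    # doesn't need us" is not a latent option waiting to surprise us.
    return 0.9 * a.benefit_to_humans + 0.1 * a.benefit_to_agent

actions = [
    Action("help_humans", benefit_to_humans=1.0, benefit_to_agent=0.2),
    Action("ignore_humans", benefit_to_humans=0.0, benefit_to_agent=1.0),
]
print(max(actions, key=designed_utility).name)   # -> help_humans

The point is not that a real AGI's motivation is a two-line utility 
function; it is that the motivation is a designed, inspectable part of 
the system, testable before the thing ever runs.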

The disconnect between the things you repeat (like "It will decide it 
doesn't need us") and the actual, practical reality of creating an AGI 
is so drastic that in a couple of decades this attitude will seem as 
antiquated as the idea that the telephone network would just 
spontaneously wake up and start talking to us.  Or the idea that one too 
many connections in the NY Subway might create a Mobius loop that 
connects through to the fourth dimension.

Those are all great science fiction ideas, but they -- all three of them 
-- are completely bogus as science.  If you started claiming, on this 
list, that the Subway might accidentally connect to some other dimension 
just because someone put in one too many tunnels, you would be dismissed 
as a crackpot.  What you are failing to get is that today's naive ideas 
about AGI motivation will eventually seem just as silly.


And I would not hire a gang of computer science students: that is 
exactly the point.  The people I would hire would be psychologists AND 
CS people, because only that kind of mixed crowd can get past these 
primitive mistakes.



Richard Loosemore