[ExI] Watson on NOVA
sjatkins at mac.com
Tue Feb 15 19:44:12 UTC 2011
On 02/15/2011 08:58 AM, spike wrote:
> ...On Behalf Of Richard Loosemore
>> ...There is nothing special about me, personally, there is just a peculiar
> fact about the kind of people doing AI research, and the particular obstacle
> that I believe is holding up that research at the moment...
> Ja, but when you say "research" in reference to AI, keep in mind the actual
> goal isn't the creation of AGI, but rather the creation of AGI that doesn't
> kill us.
Well, no. No more than the object of having a child is to have a
child with zero potential of ever doing something horrendous. Even less
so than in that analogy, since an AGI child is a radically different type
of being, of potentially radically more power than its parents. I don't
believe for an instant that it is possible to ensure such a being will
never harm us by any act of omission or commission, across all of its
changes over time. I find it infinitely more hubristic to think that we
are capable of doing so than to think that we can create the AGI, or the
seed of one, in the first place.
> After seeing the amount of progress we have made in nanotechnology in the
> quarter century since the K.Eric published Engines of Creation, I have
> concluded that replicating nanobots are a technology that is out of reach of
> human capability.
Not so. Just a good three decades further out.
> We need AI to master that difficult technology. Without
> replicating assemblers, we probably will never be able to read and simulate
> frozen or vitrified brains. So without AI, we are without nanotech, and
> consequently we are all doomed, along with our children and their children
Well, there is the upload path as one alternative.
> On the other hand, if we are successful at doing AI wrong, we are all doomed
> right now. It will decide it doesn't need us, or just sees no reason why we
> are useful for anything.
The maximal danger is that it decides we are a) in the way of what it
wants or needs to do, and b) without enough mitigating worth to receive
sufficient consideration to survive. A lesser danger is that there is
simply no niche left for us, and the AGI[s] either find us of
insufficient value to preserve, or humans cannot survive on such a
reservation or as pets.
It is quite possible that billions of humans without AGI will eventually
find there is no particular niche they can fill in any case.
> When I was young, male and single (actually I am still male now) but when I
> was young and single, I would have reasoned that it is perfectly fine to
> risk future generations on that bet: build AI now and hope it likes us,
> because all future generations are doomed to a century or less of life
> anyway, so there's no reasonable objection with betting that against
I am still pretty strongly of the mind that AGI is essential to humanity
surviving this century. A most necessary, but not necessarily
sufficient, condition.
> Now that I am middle aged, male and married, with a child, I would do that
> calculus differently. I am willing to risk that a future AI can upload a
> living being but not a frozen one, so that people of my son's generation
> have a shot at forever even if it means that we do not. There is a chance
> that a future AI could master nanotech, which gives me hope as a corpsicle
> that it could read and upload me. But I am reluctant to risk my children's
> and grandchildren's 100 years of meat world existence on just getting AI
> going as quickly as possible.
This may doom us all if AGI is indeed critical to our species' survival.
I believe it is: the complexity and velocity of potentially deadly
problems increase without bound as technology accelerates, while human
intelligence, even augmented by increasingly powerful (but not AGI)
computation and communication, remains bounded.