[ExI] Survival (was: elections again)
Jef Allbright
jef at jefallbright.net
Tue Jan 1 20:30:52 UTC 2008
On 1/1/08, Bryan Bishop <kanzure at gmail.com> wrote:
> > > > > > Growing smarter is not a simple brute-force search. Even
> > > > > > a super-smart AI won't instantly have god-like powers.
> > > >
> > > > I rather think he will.
> > >
> > > Even an AI is bounded by the laws of physics [whatever they may
> > > be].
> >
> > "Any sufficiently advanced technology is indistinguishable from
> > magic."
>
> Good call. But please direct your attention below.
>
> > > > > > have to perform slow physical experiments in the real world
> > > > > > of physics to discover or build faster communications,
> > > > > > transportation, and utilization of resources. They also will
> > > > > > have to build factories to build future hardware upgrades.
> > > > > > These macro, physical processes are slow and easily
> > > > > > disrupted. It is not clear to me that even a
> > > > > > super-intelligent AI can quickly or easily accomplish
> > > > > > anything that we really want to stop.
> > > > > >
> > > > > > I'd like to see some specific scenarios that rely on
> > > > > > something more specific than "...a miracle/singularity occurs
> > > > > > here..."
>
> Harvey was asking for more than "magic occurs here." Can we rely on
> magical miracles to lead us to AI? AI or the singularity is way too
> important to be left to magic.
This touches on a key point that seems to elude the most outspoken
proponents of hard take-off singularity scenarios: So-called
"recursively self-improving" intelligence is relevant only to the
extent that it improves via selective interaction with its
environment. If the environment lacks requisite variety, then the
"recursively self-improving" system can certainly go "vwhooom" as it
explores possibility space, but the probability of such explorations
having relevance to our world becomes minuscule, leaving such a system
hardly more effective than a cooperative of technologically augmented
humans at tiling the galaxy with paperclips.
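To put a toy number on that intuition, here is a deliberately crude
sketch in Python (my own construction; the environment's total
variety, the doubling rule, and every constant are invented for
illustration, not taken from any model discussed in this thread):

    # Crude toy model: raw internal search capacity doubles each step
    # ("vwhooom"), while environment-relevant knowledge is capped by
    # the variety the environment actually offers.
    ENV_VARIETY_BITS = 20.0   # assumed total novelty the environment supplies
    STEPS = 30

    internal_capacity = 1.0   # raw possibility-space search power
    relevant_knowledge = 0.0  # bits of environment-relevant knowledge

    for _ in range(STEPS):
        internal_capacity *= 2.0
        # Each interaction yields at most one bit, and never more than
        # the environment's remaining unseen variety.
        relevant_knowledge += min(1.0, ENV_VARIETY_BITS - relevant_knowledge)

    print("internal capacity:  %.3g" % internal_capacity)
    print("relevant knowledge: %.1f of %.0f bits"
          % (relevant_knowledge, ENV_VARIETY_BITS))

On this admittedly cartoonish model the internal explosion is real,
but the curve that matters, environment-relevant knowledge, flatlines
as soon as the available variety is exhausted.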
This suggests that the *relevant* intelligence of a singleton machine
intelligence is capped only slightly above the level supported by all
available knowledge and its latent connections, leaving it vulnerable
to asymmetric competition from a broad-based system of cooperating,
technologically augmented specialists.
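Extending the same toy arithmetic (again, every number here is
hypothetical): if each specialist has independent access to a distinct
slice of environmental variety, the cooperative's ceiling scales with
the number of specialists while the singleton's does not:

    # Same cartoon model, extended: one singleton versus N cooperating
    # specialists, each interacting with a distinct environmental niche.
    SLICE_BITS = 20        # assumed variety per environmental niche
    NUM_SPECIALISTS = 10   # cooperating, technologically augmented specialists

    singleton_ceiling = SLICE_BITS                      # one stream of interaction
    cooperative_ceiling = NUM_SPECIALISTS * SLICE_BITS  # pooled, non-overlapping niches

    print("singleton ceiling:   %d bits" % singleton_ceiling)
    print("cooperative ceiling: %d bits" % cooperative_ceiling)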
- Jef