[extropy-chat] Two draft papers: AI and existential risk; heuristics and biases

Jef Allbright jef at jefallbright.net
Tue Jun 13 17:57:11 UTC 2006


On 6/13/06, Mikko Särelä <msarela at cc.hut.fi> wrote:
> On Tue, 13 Jun 2006, Jef Allbright wrote:
> > On 6/12/06, Eliezer S. Yudkowsky <sentience at pobox.com> wrote:
> > > (4) I'm not sure whether AIs of different motives would be willing to
> > > cooperate, even among the very rare Friendly AIs.  If it is *possible*
> > > to proceed strictly by internal self-improvement, there is a
> > > *tremendous* expected utility bonus to doing so, if it avoids having
> > > to share power later.
> >
> > Eliezer, most would agree that there are huge efficiencies to be gained
> > over the evolved biological substrate, but I continue to have a problem
> > with your idea that a process can recursively self-improve in isolation.
> > Doesn't your recent emphasis on perception being the perception of
> > difference (which I strongly agree with) highlight the contradiction and
> > the enormity of the "if" in "if it is *possible* to proceed strictly by
> > internal self-improvement"?
>
> The internal workings of a system are also part of perceived reality. One
> can test out another algorithm for indexing data and notice that it works
> better, completely internally, while still perceiving the difference. Or one
> could prove that a certain algorithm for searching data is more efficient
> than another, and self-improve. The software and hardware are part of
> reality.
>

The problem is in the concept of "works better".  Where does the
knowledge defining what is better (necessarily more refined than
present internal knowledge) come from, if not from some form of
competition with that which is external to the present system?
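To make the point concrete, here is a minimal Python sketch (a hypothetical toy
example, not anything from the thread): a process compares two search
algorithms on data it generates itself and keeps whichever one "works better".
Even in this purely internal test, the definition of "better" (elapsed time on
a particular workload) is supplied by whoever wrote the benchmark; the
comparison itself cannot produce that criterion.

    # Toy example: "internal" comparison of two search algorithms.
    # The criterion of "better" (wall-clock time on this workload) is
    # baked into the benchmark, not discovered by it.
    import bisect
    import random
    import time

    def linear_search(haystack, needle):
        # Scan every element; O(n).
        for i, value in enumerate(haystack):
            if value == needle:
                return i
        return -1

    def binary_search(haystack, needle):
        # Bisect a sorted list; O(log n).
        i = bisect.bisect_left(haystack, needle)
        if i < len(haystack) and haystack[i] == needle:
            return i
        return -1

    def benchmark(algorithm, haystack, queries):
        start = time.perf_counter()
        for needle in queries:
            algorithm(haystack, needle)
        return time.perf_counter() - start

    data = sorted(random.sample(range(1_000_000), 50_000))
    queries = [random.choice(data) for _ in range(1_000)]
    timings = {fn.__name__: benchmark(fn, data, queries)
               for fn in (linear_search, binary_search)}
    print(timings, "->", min(timings, key=timings.get))

The system can indeed notice the difference without looking outside itself, but
the knowledge of what counts as an improvement came in with the benchmark.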

- Jef
