[extropy-chat] Futures Past
d.assange at ugrad.unimelb.edu.au
Mon Oct 10 08:22:04 UTC 2005
"The way to enlightenment is not to admit that
you know nothing, but to admit that you
can never know enough."
On Mon, 10 Oct 2005, Russell Wallace wrote:
> Actually I'm inclined to agree with all this; I'm not a believer in the
> "superintelligent AI pops out of someone's basement into a world that
> otherwise looks mostly like today" scenario, and I was never a great fan of
> Singularity definitions based on unpredictability (then we've had zillions
> of them) or incomprehensibility (I don't believe anything in this universe
> is fundamentally incomprehensible to a sufficiently educated human mind).
That depends upon how you define `incomprehensible' and
`human'. There are many concepts, such as the fundamental nature of
quantum mechanics and >3 spatial dimensions, which can only be explained
through mathematical formulae and analogies; imagining such things as they
`truly are' is impossible for the human mind.
> From a navigation perspective, I think what matters is the event horizon:
> the point at which we can no longer change the outcome, for good or bad; the
> board has been played into either a winning or losing position. That is the
> point which I estimate will come before 2100. (I won't be surprised if it
> comes as early as 2050; I will be surprised if it comes much before that.)
> - Russell
We may well have already passed that event horizon, and are merely not yet
aware of it.
-- Daniel Assange <d.assange at ugrad.unimelb.edu.au>
"The way to enlightenment is not to admit that you
know nothing, but to admit that you can never know