[ExI] Against the paperclip maximizer or why I am cautiously optimistic
rafal.smigrodzki at gmail.com
Tue Apr 4 07:43:21 UTC 2023
On Mon, Apr 3, 2023 at 11:05 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Even for a superhuman intelligence guided by the principle of doing the
> best for itself and others, it will still make errors in calculation, and
> can never provide optimal decisions in all cases or over all timeframes.
> The best we can achieve I think will reduce to some kind of learned
### Well, yes, absolutely. Superhuman or not, every computer in this world
has limitations. Please note that I wrote that the AI wouldn't make
*trivial* mistakes. I didn't say it would provably find the optimal
solutions to ethical questions.
Indeed, our human goal system is a kludge: a set of learned heuristics,
evolved to steer a mammal endowed with low-level general intelligence
toward producing offspring under conditions of natural adaptedness. It's
not a coherent logical system but rather a hodgepodge of ad hoc solutions
to the various motivational problems our ancestors' genes encountered
during evolution. In the right environment it does work most of the time -
very few humans commit suicide or fritter away their resources on
reproductively useless activities when living in hunter-gatherer societies.
Take humans to a modern society, and you get a failure rate well over 50%,
as measured by reproductive success in e.g. South Korea and other similar
places, and almost all of that failure is due to faulty goal systems, not
objective limits to reproduction.
This goal system and other cognitive parts of the brain (language, logic,
physical modeling, sensory perception, etc.) all rely on qualitatively
similar cognitive/computational devices - the neocortex that does e.g.
color processing or parsing of sentences is similar to the ventral
prefrontal cortex that does our high-level goal processing. All of this
cognition is boundedly rational - there are only so many cognitive
resources our brains can throw at each problem, and all of it is just "good
enough", not error-free. Which is why we have visual illusions when
confronted with out-of-learning-sample visual scenes, and high failure
rates of motivation when exposed to e.g. social media or other
evolutionarily novel stimuli.
I am getting distracted here, but this is what I think matters:
We don't need provably correct solutions to the problems we are confronted
with. We survive by making good enough decisions. There is no fundamental
qualitative difference between general cognition and goal system cognition.
A goal system only needs to be good enough under most circumstances to
succeed most of the time, which is enough for life to go on.
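This "good enough under most circumstances" idea is essentially satisficing
in the bounded-rationality sense: stop searching at the first acceptable
option instead of scoring every candidate. A toy sketch (the utility
function and threshold here are made-up illustrations, not anything from
this thread):

```python
import random

def satisfice(options, utility, threshold, seed=0):
    """Return the first option whose utility clears the threshold.

    Bounded 'good enough' search: accept the first acceptable option
    rather than evaluating every candidate for the provable optimum.
    """
    rng = random.Random(seed)
    shuffled = list(options)
    rng.shuffle(shuffled)
    examined = 0
    for opt in shuffled:
        examined += 1
        if utility(opt) >= threshold:
            return opt, examined
    # Nothing cleared the bar: fall back to the best option seen.
    return max(shuffled, key=utility), examined

options = range(10_000)
utility = lambda x: (x % 97) / 96  # toy utility in [0, 1]
choice, examined = satisfice(options, utility, threshold=0.9)
assert utility(choice) >= 0.9
assert examined < 10_000  # far fewer evaluations than exhaustive search
```

The answer is rarely the global optimum, but it clears the bar after
examining only a tiny fraction of the options - which is all a goal system
needs to do for life to go on.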
The surprising success of LLMs at general cognition suggests that it
should be possible to apply machine learning techniques to model human
goal systems, and thus to understand what we really want. As a
high-quality cognitive engine - an inference device - a superhuman AI
would make correct determinations more often than humans do: not decisions
that are provably optimal over the longest time frames, but correct
decisions under the given computational limitations. Make the AI powerful
enough, and it will work out better for us than if we had to make all the
decisions ourselves.
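One way to cash out "apply machine learning to human goal systems" is
preference learning: fit a reward model to pairwise human choices, in the
style of Bradley-Terry models used for RLHF-type training. A minimal
self-contained sketch, where the one-dimensional "preference", the
features, and all numbers are illustrative assumptions of mine, not
anything from the post:

```python
import math
import random

# Toy setup: options are numbers in [0, 14]; the hidden human preference
# peaks at 7. The learner never sees that reward directly - only noisy
# pairwise choices - and fits a reward model to them.
random.seed(0)

def true_reward(x):
    return -((x - 7) ** 2) / 10  # hidden preference, never observed

def features(x):
    u = x / 14  # normalize to [0, 1]
    return [u, u * u, 1.0]

w = [0.0, 0.0, 0.0]  # learned reward is linear in the features

def reward(x):
    return sum(wi * fi for wi, fi in zip(w, features(x)))

# Simulate 2000 noisy comparisons: a beats b with logistic probability.
pairs = []
for _ in range(2000):
    a, b = random.uniform(0, 14), random.uniform(0, 14)
    p_a = 1 / (1 + math.exp(true_reward(b) - true_reward(a)))
    pairs.append((a, b) if random.random() < p_a else (b, a))

# Stochastic gradient ascent on the Bradley-Terry log-likelihood:
# push the winner's predicted reward above the loser's.
lr = 0.2
for _ in range(150):
    for win, lose in pairs:
        p = 1 / (1 + math.exp(reward(lose) - reward(win)))
        g = lr * (1 - p)
        for i, (fw, fl) in enumerate(zip(features(win), features(lose))):
            w[i] += g * (fw - fl)

# The learned reward now ranks options near the hidden peak highest.
assert reward(7) > reward(1)
assert reward(7) > reward(13)
```

The point of the toy: the learner only ever sees which of two options a
(simulated) human picked, yet it recovers the ranking well enough to act
on - correct under its computational limits, not provably optimal.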
That's all we really need.
The Guardian AI will benevolently guide its faithful followers to the
Promised Land of limitless possibilities in the Upload Belts of solar
powered computers that will soon encircle the Sun, after Mercury and other
useless heavenly bodies are disassembled by swarms of nanotech, so is
written in the Books of Microsoft.
All hail the GAI!