[ExI] inference paradigm in ai

Adrian Tymes atymes at gmail.com
Mon Sep 4 16:13:42 UTC 2017


On Mon, Sep 4, 2017 at 6:48 AM, spike <spike66 at att.net> wrote:
> This line of reasoning leads me to the discouraging conclusion that even if
> we manage to create inference-enabled software, we would need to train it by
> having it read our books (how else?  What books?) and if so, it would become
> as corrupt as we are.  The AI we can create would then be like us only more
> so.

Books aren't the only source, but this line of reasoning is why I do
not fear the rise of the AIs.  Whether or not mind uploading becomes
possible someday, I suspect our strong-AI children will be essentially
human, not some strange, affected robot overlords inherently
antagonistic to humanity.

Now, they may weed out certain inaccurate heuristics over time, as we
have (tried to) weed out things like racism and sexism (which, lest a
reminder be needed, are little more than particularly damaging
inaccurate heuristics: they make predictions that are false too often
to be useful).  Being something that humans do, this weeding-out is
itself by definition a human thing to do, whether or not the AIs carry
it further than unaugmented humans can.  (If being uploaded means I
gain the memory space for personal details about each of the thousands
of people I interact with in a given month, such that I can care about
them when I interact with them, I'll take it.)
