[ExI] Existential risk of AI
spike at rainier66.com
Tue Mar 14 15:06:35 UTC 2023
-----Original Message-----
From: spike at rainier66.com <spike at rainier66.com>
...
>>...Or rather, how do we deal with an AGI intelligence that looks on
>>humans as dog-level intelligences?
BillK
_______________________________________________
>...Ja, BillK, there is an in-between stage here. Currently our proto-AIs
don't have their own will, but dogs do, and we guide their will to do what
we want. Long before we get to AGI superior to humans, we will be
training sub-AIs: dog-level AIs.
>...Then... as the software gets smarter, so do we.
spike
If I may stretch the K9 analogy a little further please: The veteran K9
trains the recruits by letting them watch the veteran carry out tasks at the
command of the old man. In no case does the veteran dog take out a recruit
and attempt to train it without the old man barking the commands (that
would be interesting and somewhat disturbing to see, if it ever happened.)
What we are theorizing with AGI is that software will train other software
without human intervention.
My notion is that long before that happens, we will discover better ways to
train software than our current method, which involves writing actual
software. We will develop a kind of macro language for writing higher-level
software.
spike