[ExI] AI thoughts

Kelly Anderson postmowoods at gmail.com
Wed Dec 27 06:11:12 UTC 2023


On Tue, Dec 26, 2023 at 9:42 PM Keith Henson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> I wonder if AI might be less dangerous and less useful than it seems.
> Intelligence is the application of knowledge to making decisions.
> LLMs to date have depended on human knowledge for training. The point
> here is that there may be a limit to knowledge. For example, all the
> knowledge in the universe is not going to find a new element between
> carbon and nitrogen.
>
Keith,

  What you are describing is the state of human knowledge as well. How
do humans go beyond what humans already know? All an AI requires is
some kind of optimization function that leads it to believe it
benefits from learning or inventing new things and ideas. Give it some
combination of status seeking, curiosity, money-making (a subset of
status), and other "feel good" rewards, and I'm sure it would explode
into the infospace in a thousand directions people haven't had time to
investigate.
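As a toy illustration of what such an optimization function might look
like (my sketch, not anything from the article): a count-based novelty
bonus is one of the simplest "curiosity" rewards used in reinforcement
learning. The agent gets paid more for observations it has seen less,
which pushes it toward the unexplored. The class and parameter names
here are purely illustrative.

```python
from collections import Counter

class CuriosityReward:
    """Toy intrinsic reward: novel observations pay more than familiar ones."""

    def __init__(self, bonus_scale=1.0):
        self.visit_counts = Counter()  # how often each observation has been seen
        self.bonus_scale = bonus_scale

    def reward(self, observation):
        # Count-based novelty bonus: the payoff decays as 1/sqrt(n) with
        # repeat visits, so maximizing total reward means seeking out
        # things the agent has not encountered before.
        self.visit_counts[observation] += 1
        return self.bonus_scale / self.visit_counts[observation] ** 0.5

curiosity = CuriosityReward()
print(curiosity.reward("known element"))   # first visit: full bonus, 1.0
print(curiosity.reward("known element"))   # repeat visit: decayed bonus
print(curiosity.reward("new material"))    # novel again: full bonus, 1.0
```

Real systems use fancier novelty estimates (prediction error, learned
density models), but the incentive structure is the same: reward the
frontier, not the familiar.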

Here's a small example of a simple system that's already doing things
people have never thought of... and it incorporates robots along the
way to perform experiments.
"Google AI and robots join forces to build new materials"
"Tool from Google DeepMind predicts nearly 400,000 stable substances,
and an autonomous system learns to make them in the lab."
https://www.nature.com/articles/d41586-023-03745-5

While this MAY not be dangerous... it will develop materials that
could be used in dangerous ways, for sure.

-Kelly


