[ExI] Leading AI Scientists Warn AI Could Escape Control at Any Moment
BillK
pharos at gmail.com
Sat Sep 21 15:49:25 UTC 2024
Leading AI Scientists Warn AI Could Escape Control at Any Moment
"Loss of human control or malicious use of these AI systems could lead
to catastrophic outcomes."
Sep 21, 2024 by Noor Al-Sibai
<https://futurism.com/the-byte/ai-safety-expert-warning-loss-control>
Quotes:
"Rapid advances in artificial intelligence systems’ capabilities are
pushing humanity closer to a world where AI meets and surpasses human
intelligence," begins the statement from International Dialogues on AI
Safety (IDAIS), a cross-cultural consortium of scientists intent on
mitigating AI risks.
"Experts agree these AI systems are likely to be developed in the
coming decades, with many of them believing they will arrive
imminently," the IDAIS statement continues. "Loss of human control or
malicious use of these AI systems could lead to catastrophic outcomes
for all of humanity."
------------------
Well, I suppose keeping control of AGI is a good idea in theory, but
it faces a lot of opposition. Do they really think they can control
something that greatly surpasses human intelligence?
Other AI experts argue that too much regulation will slow down
development and let other countries leap ahead, so they oppose
restrictive regulations.
Then there are the military uses of AI to consider. Weaponising AI is
seen as an absolute necessity: every nation wants to be the first to
develop far superior weapons.
To me, it looks like AGI is so powerful that every nation will decide
to say, "Damn the torpedoes, full speed ahead!"
BillK
More information about the extropy-chat
mailing list