[ExI] scioly trained by chat

spike at rainier66.com spike at rainier66.com
Wed Jan 18 20:12:00 UTC 2023


From: spike at rainier66.com <spike at rainier66.com> 

>…They will deserve it.  I will cheerfully pay a subscription fee, if it is
in the low-ish triple digits per year.

 

>…The six founders are in a good position to become the world’s first
trillionaires.

 

spike

 

 

 

The Wikipedia article on OpenAI is worth the few minutes to read:

 

https://en.wikipedia.org/wiki/OpenAI

 

Comment from the site:

 

Some scientists, such as Stephen Hawking and Stuart Russell, have
articulated concerns that if advanced AI someday gains the ability to
re-design itself at an ever-increasing rate, an unstoppable "intelligence
explosion" could lead to human extinction. Musk characterizes AI as
humanity's "biggest existential threat." [29] OpenAI's founders structured
it as a non-profit so that they could focus its research on making positive
long-term contributions to humanity. [5]

Musk and Altman have stated they are partly motivated by concerns about the
existential risk from artificial general intelligence. [30] [28] OpenAI
states that "it's hard to fathom how much human-level AI could benefit
society," and that it is equally difficult to comprehend "how much it could
damage society if built or used incorrectly". [5] Research on safety cannot
safely be postponed: "because of AI's surprising history, it's hard to
predict when human-level AI might come within reach." [31]

 

When I read Musk’s comments about research in friendly AI increasing the
risk of accidentally creating unfriendly AI, I am reminded of the now
plausible (and tweetable) notion that research aimed at evolving a harmless
version of the coronavirus may be what caused Covid-19.  Until Musk bought
Twitter, that notion was non-tweetable: it would get one’s Twitter account
suspended.  I had no Twitter account at the time, but I didn’t get spanked
when I posted the theory on ExIchat in the spring of 2020.

 

spike
