[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Gordon Swobe gordon.swobe at gmail.com
Wed Mar 29 21:29:49 UTC 2023


Is there any debate that AI development and deployment needs regulatory
oversight? That is one reason for the proposed pause.

We have a similar situation in crypto, where I have focused most of my
attention in recent years. It’s the wild wild west. Some of the most
libertarian people in the community want to keep it that way — I call them
cryptoanarchists — and others like me want clear regulations.

-gts

On Wed, Mar 29, 2023 at 8:44 AM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> The Chinese would be very grateful for a Western hiatus on AI development.
> People are yelling and screaming about the dangers of AI, but no one can stop
> the golden dragon. It’s too potentially lucrative. Try dangling a slab of
> meat over a pack of starving wolves and just try telling them to be patient.
>
> On Mar 29, 2023, at 2:11 AM, Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> I agree and am glad to see this development. As I have argued here, these
> language models literally have no idea what they are talking about. They
> have mastered the structures of language but have no grounding. They are
> blind software applications with no idea of the meanings of the words and
> sentences they generate. If they were human, we would call them sophists.
>
> From the letter:
>
> --
> Contemporary AI systems are now becoming human-competitive at general
> tasks,[3] and we must ask ourselves: Should we let machines flood our
> information channels with propaganda and untruth? Should we automate away
> all the jobs, including the fulfilling ones? Should we develop nonhuman
> minds that might eventually outnumber, outsmart, obsolete and replace us?
> Should we risk loss of control of our civilization? Such decisions must not
> be delegated to unelected tech leaders. Powerful AI systems should be
> developed only once we are confident that their effects will be positive
> and their risks will be manageable. This confidence must be well justified
> and increase with the magnitude of a system's potential effects. OpenAI's
> recent statement regarding artificial general intelligence, states that "At
> some point, it may be important to get independent review before starting
> to train future systems, and for the most advanced efforts to agree to
> limit the rate of growth of compute used for creating new models." We
> agree. That point is now.
>
>
> Therefore, we call on all AI labs to immediately pause for at least 6
> months the training of AI systems more powerful than GPT-4. This pause
> should be public and verifiable, and include all key actors. If such a
> pause cannot be enacted quickly, governments should step in and institute a
> moratorium.
> --
> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20
>
> -gts
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>