[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Will Steinberg steinberg.will at gmail.com
Thu Mar 30 03:15:29 UTC 2023


I think it's fair to say that haphazardly developing technology that carries
even a possibility of total existential risk is bad.  We almost
killed ourselves with nukes in the middle of the last century.  I don't
think wanting to avoid that with AI is religious zealotry; it's just sensible.

I don't think we will (or even can) shut down this process, though.  We
probably need to use 'dumb AI' to help us figure out alignment problems for
'smart AI' NOW.  I'm not sure we can develop a good plan quickly enough
otherwise.

On Wed, Mar 29, 2023 at 11:10 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> What a stupid idea.
> Fear mongering at its best. This cannot be stopped, and it should not be
> stopped. AI will actually change and solve most of our problems, as most
> technology has done over time.
> This is the last bastion for the religious and superstitious minds. They
> fear that human supremacy in intelligence could be over, so their
> entire vision of the world is collapsing. They want it both ways, like
> Gordon: the AIs do not understand and are faking intelligence and
> meaning, yet at the same time they are dangerous and will take over the
> world. Such an irrational and imaginative way of thinking.
> Giovanni
>
>
> On Tue, Mar 28, 2023 at 11:14 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I agree and am glad to see this development. As I have argued here, these
>> language models literally have no idea what they are talking about. They
>> have mastered the structures of language but have no grounding. They are
>> blind software applications with no idea of the meanings of the words and
>> sentences they generate. If they were human, we would call them sophists.
>>
>> From the letter:
>>
>> --
>> Contemporary AI systems are now becoming human-competitive at general
>> tasks,[3] and we must ask ourselves: Should we let machines flood our
>> information channels with propaganda and untruth? Should we automate away
>> all the jobs, including the fulfilling ones? Should we develop nonhuman
>> minds that might eventually outnumber, outsmart, obsolete and replace us?
>> Should we risk loss of control of our civilization? Such decisions must not
>> be delegated to unelected tech leaders. Powerful AI systems should be
>> developed only once we are confident that their effects will be positive
>> and their risks will be manageable. This confidence must be well justified
>> and increase with the magnitude of a system's potential effects. OpenAI's
>> recent statement regarding artificial general intelligence, states that "At
>> some point, it may be important to get independent review before starting
>> to train future systems, and for the most advanced efforts to agree to
>> limit the rate of growth of compute used for creating new models." We
>> agree. That point is now.
>>
>>
>> Therefore, we call on all AI labs to immediately pause for at least 6
>> months the training of AI systems more powerful than GPT-4. This pause
>> should be public and verifiable, and include all key actors. If such a
>> pause cannot be enacted quickly, governments should step in and institute a
>> moratorium.
>> --
>> https://twitter.com/SmokeAwayyy/status/1640906401408225280?s=20
>>
>> -gts