[ExI] Elon Musk, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

Giovanni Santostasi gsantostasi at gmail.com
Sat Apr 1 12:42:46 UTC 2023


Exactly,
A lot of the resistance we see to AI is really about humans not being able
to cope with abundance.
I see the same thing happening right now with AI art, for example. The
amount being created is overwhelming; the variety and the creativity are
like a flood.
I love it and I relish it, but many people cannot handle it.
Giovanni

On Fri, Mar 31, 2023 at 11:41 PM sjatkins <sjatkins at protonmail.com> wrote:

> In general I think humans find it difficult to accept actual abundance.
> It goes against the deeply evolved expectation of scarcity. We even
> invent scarcity where it doesn't exist.
>
>
>
>
> -------- Original Message --------
> On Mar 31, 2023, 3:14 AM, Giovanni Santostasi < gsantostasi at gmail.com>
> wrote:
>
>
> Samantha,
> You nailed it: this is not about existential dangers from AI but about
> the dominance and power of the rich and wealthy.
> Giovanni
>
> On Fri, Mar 31, 2023 at 1:43 AM sjatkins via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> I very much disagree with those who want to shut GPT-x down.  The
>> refrain that new tech will take too many jobs has been heard over and
>> over since the Industrial Revolution began.  Some jobs disappear and
>> others open.  That the language models don't understand means they are
>> not AGIs and thus not directly human-competitive.  They have no agency.
>> What they are is a fantastic tool that needs to be used by humans to do
>> anything.  In other words, these language models are a fantastic
>> augmentation of human abilities.  We really, really need that.  We need
>> as much effective human intelligence and productivity as we can get, and
>> we need it as fast as we can get it.
>>
>> I have a suspicion that some powers that be are a bit nervous about the
>> potential to augment the effective intelligence of so many.  It could
>> threaten their position and comparative advantage.  I think they are
>> especially afraid now that more work is coming out on how to augment and
>> refine these systems more efficiently and cheaply.  If that comes to
>> pass, it will not be under the exclusive control of those who can afford
>> large resources.  That also gives me hope that the technology is already
>> out of the bag and proliferating too fast to be stopped.
>>
>> - samantha
>>
>> ------- Original Message -------
>> On Friday, March 31st, 2023 at 2:25 AM, Rafal Smigrodzki via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>>
>>
>>
>>
>>
>>> TIME Magazine, March 29:
>>> "Shut down all the large GPU clusters (the large computer farms where
>>> the most powerful AIs are refined). Shut down all the large training runs.
>>> Put a ceiling on how much computing power anyone is allowed to use in
>>> training an AI system, and move it downward over the coming years to
>>> compensate for more efficient training algorithms. No exceptions for
>>> anyone, including governments and militaries. Make immediate multinational
>>> agreements to prevent the prohibited activities from moving elsewhere.
>>> Track all GPUs sold. If intelligence says that a country outside the
>>> agreement is building a GPU cluster, be less scared of a shooting conflict
>>> between nations than of the moratorium being violated; be willing to
>>> destroy a rogue datacenter by airstrike."
>>> -Eliezer Yudkowsky
>>>
>>> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>>>
>> ### I remember Eliezer being quite libertarian back in the day, and now
>> he wants the World Government to bomb any independent locus of thought
>> to smithereens. People change.
>>
>> This is stupid. A government is a long-feedback-loop entity, extremely
>> inefficient and slow to respond to truly new challenges, unlikely to
>> maintain alignment with the goals of its human subjects, and its
>> failures grow with its size. It would be suicidal to try to use the
>> mechanism of government to solve AI alignment.
>>
>> Our only chance of surviving the singularity is to build a guardian AI,
>> an aligned superhuman AI that would be capable of preventing the emergence
>> of unaligned or malicious superhuman AIs - a bit like a world government
>> but without the psychopaths and the idiots.
>>
>> Our best chance for building the guardian AI is for highly competent and
>> benevolent AI programmers with unlimited resources to work as fast as they
>> can, unimpeded by regulations (see "long-feedback loop" and "extremely
>> inefficient" for why regulations are a bad idea). Give them all the compute
>> they can use and keep our fingers crossed.
>>
>> Maybe we'll make it to our rapture of the nerds.
>>
>> Rafal
>>
>>