[ExI] Zuckerberg is democratizing the singularity

BillK pharos at gmail.com
Wed Jul 24 20:48:19 UTC 2024


On Wed, 24 Jul 2024 at 08:43, Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I cannot keep track of all the dominoes that will fall because of what
> Mark Zuckerberg did today. If AI is the New World that we are exploring,
> then Mark Zuckerberg just burned our ships like Hernan Cortez. His
> company Meta has developed and trained Llama 3.1, which is one of the
> best-performing LLMs on the entire market, and today he just made it open
> source.  In less than two hours I had the 8 billion parameter version of
> Llama 3.1 up and running on my Windows 11 laptop. Eliezer Yudkowsky is
> probably shitting his pants right now because the genie is all the way
> out of the bottle. But I am more optimistic than I have been since Open
> AI was actually open source. Some greedy men tried to monopolize AI for
> their own power, but Zuckerberg just gave it to the people. In my
> estimation, he has made it possible to carry freedom and democracy into
> the Singularity and beyond.
>
> If you want to run your own local copy of Llama 3.1 (~5 GB of hard drive)
> you can either download it from the meta.com site and compile it yourself,
> which is a pain. Or, if you have Windows and want a smooth and easy
> automatic install, go to ollama.com and install Ollama (which helps you
> install many different AI models) and follow the instructions.
>
> Stuart LaForge
> ______________________________________________
>
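
First, a practical note: the Ollama route really is that easy. Once Ollama
is installed and has pulled the model (ollama pull llama3.1), it runs a
local HTTP server on port 11434 that a few lines of Python can talk to.
Here is a rough sketch, assuming a default Ollama install and the requests
library; the details may differ on other setups:

# Rough sketch: query a locally running Ollama server (default port 11434).
# Assumes "ollama pull llama3.1" has already downloaded the 8B model.
import requests

def ask_llama(prompt, model="llama3.1"):
    """Send one prompt to the local Ollama API and return the full reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_llama("Are there any dangers in making Llama 3.1 open source?"))

The same call works for any other model Ollama has pulled; only the model
name changes.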


Well, carrying freedom and democracy into the Singularity and beyond is a
nice ideal. But making powerful AI systems available to everyone means that
the bad guys can use them too.

I asked the new meta-llama-3.1-405b-instruct whether there were any dangers
in making Llama 3.1 available to everybody, but it refused to answer the
question and just referred me to the Meta licensing terms.

OK, so I then asked the iAsk AI, and it replied "Oh Yes, indeed!".
So AI, like most tools, can be used for good or evil.

The iAsk AI answer was quite long and detailed, but here is the conclusion:

*Conclusion*

In summary, while there are undeniable benefits associated with making
advanced AI models like Llama 3.1 available as open source—such as
fostering innovation and collaboration—there are also significant dangers
that must be addressed proactively through community engagement, education,
monitoring practices, and regulatory frameworks.

*Bold Answer: There are several dangers associated with having Llama 3.1
open source and available to everybody*, including misuse for malicious
purposes, propagation of misinformation, ethical considerations regarding
bias and discrimination, security vulnerabilities due to transparency in
code access, intellectual property issues related to generated content
ownership rights, potential impacts on employment due to automation effects
in various sectors, and a general lack of accountability for harmful
outputs produced by users.

-----------------------------------