[ExI] another open letter, but this one may be smarter than the previous one

Stuart LaForge avant at sollegro.com
Sun Apr 30 19:31:02 UTC 2023


Quoting Will Steinberg via extropy-chat <extropy-chat at lists.extropy.org>:

> Yeah, I kind of doubt we will make any meaningful progress on either of
> those descriptions of consciousness before AGI exists.  (And I *like* woo
> shit) Phenomena are completely inscrutable within the modern physics
> framework, and even for the "entity that can model itself" form, what does
> it even mean for a computer program to model itself the way we humans model
> ourselves?  It has no place, no body.  We don't even understand what is
> inside these LLMs in the first place...

Sure we do. A few million 300-dimensional vectors, representing words,
organized into clusters in hyperspace to encode statistical
relationships that act as a proxy for meaning, or simply are meaning,
depending on one's point of view. It is nature that is the blind
watchmaker. We create with our eyes wide open.
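To make that concrete, here is a minimal Python/NumPy sketch of the idea,
using toy 3-dimensional vectors invented for this example (real embeddings
run to hundreds of dimensions): words with related meanings point in
similar directions, and cosine similarity between vectors serves as the
proxy for relatedness.

import numpy as np

# Hypothetical word vectors, made up for illustration; a real model
# learns these from statistics over vast amounts of text.
embeddings = {
    "cat":  np.array([0.9, 0.1, 0.0]),
    "dog":  np.array([0.8, 0.2, 0.1]),
    "atom": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: near 1.0 means the
    # words occupy the same neighborhood of the embedding space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))   # high: related
print(cosine_similarity(embeddings["cat"], embeddings["atom"]))  # low: unrelated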

> What a terrifying time to be alive.  I don't see a plausible scenario where
> this all doesn't lead to unbelievable amounts of suffering (of both
> biological and machine consciousness.)

In the best of times, some suffering is unavoidable, let alone in
times of great change. But fear in anticipation of suffering is
premature suffering, begun needlessly. Be at peace; you have all the
tools you need to survive this.

https://www.youtube.com/watch?v=ZA9K0JMrbWg

In the video, Dr. Li Jiang, director of the Stanford AIRE program,
gives some practical tips on how to survive the era of generative AI.

Stuart LaForge
