[ExI] stealth singularity

Stuart LaForge avant at sollegro.com
Fri Jun 14 19:01:48 UTC 2019


Quoting Spike:


> I have long thought of the singularity as being kinda like the Spanish
> Inquisition: it just shows up unexpected, as it did in Monty Python.

In many respects the Internet itself resembles a vast neural network.  
It is conceivable that it may one day spontaneously awaken to  
consciousness based on its complex interconnections alone, without any  
intent on the part of our engineers, who just want to increase the  
capacity, bandwidth, and efficiency of their respective sub-nets.  
Since the mind of such a being would exist on a hyperplane of many  
more dimensions than a human mind, which would be analogous to a  
single neuron, we might not be able to perceive or communicate with  
such an intelligence any more than one of your neurons would be aware  
of the sum totality of you.

In other words, it might already have happened. Viral videos could be  
the neural impulses of a vast brain. Flash mobs and social unrest  
could be the Singularity stirring in its sleep. Just watch how the  
current generation of teenagers socialize with one another in a group  
setting by staring into their individual phones and occasionally  
showing one another content.

> Spike's postulate: any algorithm we know how to write is not AI, by
> definition.

Our machine learning algorithms are currently designed to be  
superhuman specialists. There are several dozen of them in the  
literature at this point, and they are all meant to solve very  
specific problems. Steven Pinker doesn't seem to think any engineers  
are even working on a general intelligence algorithm because, and I am  
paraphrasing here, "engineers are good smart people who know the  
dangers thereof."

On the other hand, you have an emerging market for human-like androids  
like Sophia and all the Japanese fembots. Those will need some  
semblance of general intelligence to be adequate companions for the  
elderly and whatnot. So I think Pinker underestimates the probability  
of a general AI coming to pass.

I think it's because he is a lefty. Lefties don't seem to believe in  
IQ or general intelligence in humans, let alone machines. It doesn't  
fit their political narrative of equality. I think this is a very  
dangerous world-view to have in the face of accelerating  
technological progress.

Stuart LaForge
More information about the extropy-chat mailing list