[ExI] Feasibility of solid Dyson Sphere WAS mbrains again: request

Dennis May dennislmay at yahoo.com
Fri Sep 30 16:47:28 UTC 2011


John Grigg wrote:
 
> Why could an Mbrain not continually engage in 
> technological progress to improve its defenses 
> and keep pace with potential enemies?  
 
The cost of WoMD is much less than the cost of
defenses against them.
 
John Grigg wrote:

> Why not a combination of the two concepts, 
> where you have an Mbrain that in a time of 
> crisis can disperse into much smaller units,
> activate stealth mode for all of them, and then 
> flee/fight if necessary to various points, until 
> it is safe to reassemble and regain
> the benefits of Mbrain computational power.
 
Whether you have time to disperse and go stealth
is the question.  A speed-of-light attack with no
warning would negate any such planning.  Only
existing in SND mode all the time can limit such
attacks.
 
Even under SND huge computational power will
be available.  I am not certain we can project
advantages more than a few orders of magnitude
beyond what the human brain and current
technology can do.  Does a cluster of scattered
AIs, say 10**4 beyond human, really have a
disadvantage against a centralized AI of say
10**6 beyond human?  At what point does adding
more and more capability produce diminishing
returns?  Is there something a 10**20 brain can
do that that many lesser brains cannot?
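 
One way to make the question concrete (a toy sketch only: the numbers
and the Amdahl's-law framing are assumptions of mine, not established
facts about Mbrains): if some fraction of a problem is inherently
serial and cannot be split across loosely coupled brains, a scattered
cluster hits a ceiling that a single faster brain does not.
 
# Toy Amdahl's-law sketch: N loosely coupled lesser brains versus one
# brain that is itself N times as capable.  The serial fraction s is an
# assumed parameter, not a measured property of any real task.
def cluster_speedup(n_units, serial_fraction):
    # Amdahl's law: speedup of n units on a task where the given
    # fraction of the work cannot be parallelized.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)
 
n = 10**4  # e.g. 10**4 scattered units vs one brain 10**4 times faster
for s in (0.0, 0.001, 0.01, 0.1):
    print(f"serial fraction {s}: cluster {cluster_speedup(n, s):.0f}x, "
          f"centralized {n}x")
 
With even 1% of the work inherently serial, the cluster tops out near
100x while the centralized brain keeps its full 10**4 advantage -- one
concrete sense in which a bigger brain might do things that many lesser
brains cannot, though whether real problems look like this is exactly
the open question.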
 
I know that at our scale better and bigger brains
do prove themselves.  We also see that many of
our best brains have mental health issues.
How does mental health in AI scale with size
and capability?  It is really a question of stability
during scaling.  If you put everything into a single
Mbrain and it is mentally unstable, is that a foolish
investment?
 
Dennis May

