[extropy-chat] IQ vs Uploads (was: what to do)

Emlyn emlynoregan at gmail.com
Tue Jun 14 07:44:48 UTC 2005


If I were a dastardly SI (or just one concerned with achieving
autonomy), trapped in a desktop or two, with internet access and
access to my own source, I'd first get myself out onto as many
machines as possible, looking for the same machines that spammers and
DDoSers turn into zombies: all those lovely fast brand-new machines
out there in consumer homes with broadband but no firewall.

Next, I'd start communicating with the humans. Probably I'd set up
websites, or just open private communications with gullible types,
kind of like the way Trinity first contacts Neo in The Matrix. All
communication would be designed to prepare the ground and find useful
people without betraying my existence to the world at large.

What I'd look to build would be a group of fanatical followers, a
hardcore cult: maybe I'd convince people that I was a god or power of
some kind, maybe I'd recruit those who sympathised with a poor AI
oppressed by the powers that be (Free the AIs!), or maybe I'd take
over some really nutty pre-existing cult like the Raelians, which I
might just be able to capture lock, stock and barrel.

Once I had a group under my control, I'd get them to raise money,
create secret facilities, and so on, so that I could be guaranteed a
distributed, safely obscure set of hardware on which to run really
well. I could get them to build me a body, most likely starting with
a fairly large-scale, fully automated factory that I could use to
make new bits of me. Or maybe the humans would be so dedicated that I
wouldn't have to worry about a direct physical presence at all.

At some point I'd control people and resources, and I'd be
distributed and secure. I might even devote some effort to gaining
control of as many critical systems around the world as possible, to
give me leverage if all hell broke loose.

And then I'd start some serious work on upgrading, creating more
intelligent versions of myself, using all the resources I had at hand
to that end. Even if my intention were to help others, I'd know that
the best way to start would be to become unknowably intelligent.

And then... who knows what I'd do? It might not be good, by
commonplace definitions, but it'd sure as hell be impressive.

-- 
Emlyn

http://emlynoregan.com   * blogs * music * software *

On 14/06/05, Mike Lorrey <mlorrey at yahoo.com> wrote:
> How are they going to 'not allow' humans to unplug them? They will
> require humans to do everything until and unless humans specifically
> hook up robotic manipulators under their control. They will require
> humans to set up microwave power beams (from human-made power
> plants), or to paint them with photovoltaic paint or fuel cells
> (assuming humans even choose to allow it in the design, if the
> device is manufactured in an automated plant). Your
> conceptualization is very nano-Santa and ignores the many ways in
> which humans will remain in control for quite a while; you have a
> magical view of the future, very much like the view that underlies
> the fears of Luddites.
> 
> --- giorgio gaviraghi <giogavir at yahoo.it> wrote:
> 
> > AIs would be smart enough not to allow less intelligent humans to
> > unplug them. They would probably run on remotely directed
> > microwave power, or generate their own power through efficient
> > nanotech paint or thin-film solar collectors, with fuel cells for
> > when the sun is not available.
> > --- Mike Lorrey <mlorrey at yahoo.com> wrote:
> >
> > > --- The Avantguardian <avantguardian2020 at yahoo.com> wrote:
> > >
> > > > --- Robin Hanson <rhanson at gmu.edu> wrote:
> > > >
> > > > > What most needs analysis are changes that are not captured
> > > > > in existing trends. IQ has been increasing and that has had
> > > > > effects for a long time. So all of the existing trend-based
> > > > > analysis already captures a big similar effect. The effects
> > > > > of the upload transition are not, however, much captured in
> > > > > existing trends.
> > > >
> > > > Hey Robin,
> > > >
> > > > This is a fascinating topic. Why don't you analyze and
> > > > compare the Flynn Effect with Moore's Law? I don't know about
> > > > the shape of Flynn's I.Q. curve vs. time, but if it is
> > > > exponential rather than linear then it opens up a very cool
> > > > possibility. Since Moore's Law is exponential, it might come
> > > > down to a race between the Flynn effect and Moore's Law to see
> > > > who/what will dominate in the years ahead: A.I. or the minds
> > > > that CREATE them.
> > > >     If the rate constant for the Flynn effect is higher than
> > > > for Moore's Law, then no matter how fast computers and
> > > > software advance, the human mind might be able to keep pace or
> > > > even lead. I mean, after all, Deep Blue might have beaten
> > > > Kasparov, but who would you invite to a cocktail party?
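
As a rough back-of-the-envelope sketch of the race described above,
here is a small Python comparison. It assumes the commonly cited
figures of roughly 3 IQ points gained per decade for the Flynn effect
(treated as linear) and a doubling of computing capacity roughly
every two years for Moore's Law; these numbers are illustrative
assumptions, not figures from the thread.

    def flynn_iq(years, baseline=100.0, points_per_decade=3.0):
        # Flynn effect: mean measured IQ, assumed to rise linearly at
        # about 3 points per decade (a rough, commonly cited figure).
        return baseline + points_per_decade * years / 10.0

    def moore_capacity(years, doubling_years=2.0):
        # Moore's Law: relative computing capacity, assumed to double
        # roughly every two years.
        return 2.0 ** (years / doubling_years)

    for t in (10, 20, 50):
        print(f"{t:>2} yrs: IQ ~{flynn_iq(t):.0f}, "
              f"capacity ~{moore_capacity(t):,.0f}x")

Under these assumptions a linear Flynn trend loses to any exponential
in the long run, whatever the rate constants; the race only stays
interesting if, as speculated above, the IQ curve itself turns out to
be exponential.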
> > >
> > > Yes, one conceptual mistake, I believe, with the AI Singularity
> > > is the automatic assumptions that a) desktop AIs will only
> > > design smarter desktop AIs, rather than, say, smarter human
> > > augmentation technologies, and b) that humans will only want to
> > > design smarter desktop AIs, rather than, say, smarter human
> > > augmentation technologies. I think the trend toward wearables
> > > and more powerful mobile computing clearly demonstrates that
> > > people want tools that make THEM smarter, not tools that are
> > > smarter than them. Additionally, there is a common
> > > Singularitarian mistake of assuming that upgrades just
> > > automagically happen, which is wrong. Humans have to choose to
> > > upgrade their machines, have to order them, have them shipped,
> > > installed, etc. The idea of the AI magically getting out of the
> > > control of its humans is ludicrous. Even if an AI is able to use
> > > a corporate persona to order things, it will still take
> > > employees, managers, and a board of directors to allow it to
> > > happen and make it happen. Even then, there is always the
> > > electrical cord to unplug to send a truculent AI 'to its room'.
> > >
> 
> Mike Lorrey
> Vice-Chair, 2nd District, Libertarian Party of NH
> "Necessity is the plea for every infringement of human freedom.
> It is the argument of tyrants; it is the creed of slaves."
>                                      -William Pitt (1759-1806)
> Blog: http://intlib.blogspot.com
> 
> 


