[extropy-chat] Singularity economic tradeoffs (was: MARS: Because it is hard)
eugen at leitl.org
Sun Apr 18 10:47:12 UTC 2004
On Sat, Apr 17, 2004 at 01:11:22PM -0700, Samantha Atkins wrote:
> >SI is driven by superhuman, superrealtime level agents. Augmenting
> >people has
> >a high threshold, and hence will be late.
> Please go into what you see as "high threshold" and "late" here.
High threshold implies invasive medical nanoware. This is an extremely hairy,
complex technology. Computronium (3d integrated molecular circuitry) is
trivial in comparison. We already have printable organic and polymer
electronics, and first instances of spintronics (spin flipping saves power
and spintronics is usually nonvolatile, which also saves power). Predictions
are cheap, so I predict this is how 3d nanoelectronics is going to happen:
multilayer deposition of submicron organic structures. They don't need to be
fast if you can print them at several m^2/s from a single 100 k$ rig,
especially on top of each other.
> Humans start the game with considerable advantages that it will take a
> good bit of hardware, code and self-improvement or artificial evolution
> to duplicate and exceed. Given the right seed conditions of sufficient
Absolutely. But humans have more or less constant (weakly augmentable) mental
capacities, without resorting to the abovementioned CNS-invading medical
nanoware.
> hardware, code and self-improvement tools this will of course happen
> extremely quickly.
Yes. Self-enhancement is also a high-threshold phenomenon. Of course, people
are pretty smart, and they can integrate statistical algorithms for low-level
enhancement, whether at hardware or software level.
> > Way too late. Same technology will make
> >AI-capable hardware available much before.
> Hardware alone will never produce an AI. Hardware + proper training
> environment + software + time are needed.
I do not make a major distinction between hardware and software, lumping both
into "the system". In fact I actively discourage use of "software" in the
context of fine-grained cellular hardware, preferring the term "state".
Why is hardware a bottleneck? 1) Biological emulation, and biologically
inspired systems (spiking finite-state-automaton networks), run awfully on
conventional architectures. They only become interesting at a high degree of
cellularity (ideally, one network node per hardware node), or with a large
number of fat nodes. 2) De novo AI requires plowing through lots of sterile
search-space landscapes before latching onto a sweet spot. Finding that Eden
configuration will require a lot of crunch: way more hardware than is
required to run a well-adjusted AI.
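To make the cellularity point concrete, here is a toy sketch (purely illustrative, not any real framework) of a spiking finite-state-automaton network stepped the way a conventional serial CPU must step it. The cost of each tick grows with network size; hardware with one node per network node would update every cell in a single step.

```python
# Illustrative sketch: a spiking finite-state automaton network on a
# conventional CPU. The serial loop over all cells per tick is the
# bottleneck; one-node-per-hardware-node would do this in parallel.

def step(states, neighbors, threshold=2):
    """One synchronous update: a cell spikes (1) iff enough of its
    neighbors spiked on the previous tick, else it decays to 0."""
    new = []
    for i in range(len(states)):          # the serial bottleneck
        excitation = sum(states[j] for j in neighbors[i])
        new.append(1 if excitation >= threshold else 0)
    return new

# Tiny ring network: each cell listens to its two neighbors.
n = 8
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]
states = [1, 1, 0, 0, 1, 1, 0, 0]
for _ in range(3):
    states = step(states, neighbors)
```

Even this toy shows the scaling problem: simulating N cells for T ticks costs O(N*T) on a serial machine, while cellular hardware pays only O(T).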
> > Software doesn't figure
> >prominently, because humans write software.
> I don't see this claim. Software is not just computer code hacked
> together by humans. Self modifying and computer-generated software is
> still software. Are you distinguishing software from code?
People are barely smart enough to write a seed. Evolutionary systems and
superrealtime simulators only look simple from a bird's-eye view. The
problem with simulators is performance; the problem with an evolutionary
framework is both performance and flexibility (it must be flexible enough
to evolve how to evolve).
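What "evolving how to evolve" means can be sketched in miniature: put the search strategy itself under selection. The following hedged toy (all names and parameters are mine, illustrative only) gives each genome its own mutation step size, the classic self-adaptation trick from evolution strategies.

```python
import random

# Toy self-adapting evolutionary loop: each genome carries its own
# mutation step size (sigma), so the search strategy is itself
# evolved. Illustrative only; not from any real framework.

def evolve(fitness, generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    # A genome is (value, sigma): the candidate solution plus its
    # own mutation step size.
    pop = [(rng.uniform(-5.0, 5.0), 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for value, sigma in pop:
            # Mutate the strategy parameter first, then use it to
            # mutate the value: self-adaptation in miniature.
            new_sigma = max(1e-3, sigma * rng.lognormvariate(0.0, 0.2))
            children.append((value + rng.gauss(0.0, new_sigma), new_sigma))
        # (mu + lambda) selection: keep the fittest of parents + children.
        pop = sorted(pop + children, key=lambda g: fitness(g[0]))[:pop_size]
    return pop[0]

# Minimize (x - 3)^2; the population should converge near x = 3.
best_value, best_sigma = evolve(lambda x: (x - 3.0) ** 2)
```

Note the flexibility problem mentioned above: even this toy hardwires the representation and the selection scheme; a framework that could evolve those layers too is the hard part.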
> > As such it has a ceiling. Methods
> >are getting better, and there are synergies, but there is a distinct
> No, there isn't unless you believe that only humans (and unaugmented
> ones at that) write software.
Yes, humans write software. Code. Whatever. Machines do something different.
> This is a lot of assumption about the possible paths to SAI.
Of course. Most of them are "educated", though.
> Interesting as the gaming world is quite strong and actually often
> drives significant consumer hardware improvements. I shudder to
Game AI doesn't need to see, since it has direct access to the internal data
structures representing the game world. It does learn, though, and you need
a lot of crunch to render photorealistic movies, some of which will
hopefully be hijackable for all-purposeish computations.
> contemplate what AI characters designed for the standard violent
> adventure type games would become if they ran on sufficient hardware
> and got to the bootstrap threshold. Military battle sim entities
> would yield roughly equivalent nightmares.
Yes. Both industry/household and military robotics need to deal with the real
world, though. Any real-world-agnostic system has just fallen into its own
virtual navel, and can't get out. We obviously don't have to deal with such
systems.
> >There is very little spontaneity about building a supercritical AI. It
> >is a
> >deliberate project, with a very specific goal. Google is not going to
> >suddenly awaken, and start commenting on your queries.
> It would not be that difficult to mate a current generation Eliza to
> query history and analysis tools thus producing fairly interesting
> comments on queries. Add in a bit better psychological modeling and
Even a Markov bot empirically passes the Turing test, at least as far as IRC
#channel people are concerned. Try it, it's hilarious.
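The whole trick is a few lines. Here is a minimal word-level Markov bot of the kind meant above (a toy sketch of my own, not any particular bot; real channel bots train on far more log data): learn bigram transitions from logged text, then babble by random walk.

```python
import random
from collections import defaultdict

# Minimal word-level Markov chatter: bigram model plus random walk.
# Illustrative sketch only; corpus and names are made up.

def train(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=10, seed=0):
    """Random-walk the chain, starting from a given word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the bot reads the channel and the bot replies to the channel"
line = babble(train(corpus), "the")
```

Every word it emits was seen in the training log, which is exactly why it sounds locally plausible to channel regulars while understanding nothing.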
> it would not be difficult to frighten most evolved chimps.
I'm not frightened by chatbots. Machine vision good enough to track targets
in the real world and hit them gives me the willies, though.
Given that my postponed-email queue is still pretty full and my vacation is
almost over (with some stuff yet to do today), I'm probably going to have to
cut a lot of dangling ends. Damn.
Sorry about that; you probably know who you are.
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE