[extropy-chat] Singularity economic tradeoffs (was: MARS: Because it is hard)
samantha at objectent.com
Sat Apr 17 20:11:22 UTC 2004
On Apr 16, 2004, at 10:38 AM, Eugen Leitl wrote:
> On Fri, Apr 16, 2004 at 07:54:47AM -0400, Dan Clemmensen wrote:
>> A singularity driven by computer power and software does not depend
>> on any
>> particular hardware improvement such as molecular circuitry, or any
>> software technology such as AI (except in the broadest sense.)
> SI is driven by superhuman, superrealtime level agents. Augmenting
> people has
> a high threshold, and hence will be late.
Please go into what you see as "high threshold" and "late" here.
Humans start the game with considerable advantages that it will take a
good bit of hardware, code, and self-improvement or artificial evolution
to duplicate and exceed. Given the right seed conditions of sufficient
hardware, code, and self-improvement tools, this will of course happen.
> Way too late. Same technology will make
> AI-capable hardware available much before.
Hardware alone will never produce an AI. Hardware + proper training
environment + software + time are needed.
> Software doesn't figure
> prominently, because humans write software.
I don't buy this claim. Software is not just computer code hacked
together by humans; self-modifying and computer-generated software is
still software. Are you distinguishing software from code?
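To make the distinction concrete, here is a toy sketch (names invented for illustration) of computer-generated software: a program that emits new Python source at runtime and executes it, producing a function no human typed out directly.

```python
# Toy illustration: software need not be hand-written by humans.
# generate_adder() writes Python source as a string, then compiles and
# executes it, yielding a new function authored by the machine.

def generate_adder(n):
    """Emit source for a function that adds n, compile it, return it."""
    source = f"def add_{n}(x):\n    return x + {n}\n"
    namespace = {}
    exec(source, namespace)  # the program, not a human, "wrote" this code
    return namespace[f"add_{n}"]

add_5 = generate_adder(5)
print(add_5(10))  # prints 15
```

Trivial, of course, but it shows why "humans write software" is not a ceiling in principle: the generator could just as well emit, test, and keep variants of itself.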
> As such it has a ceiling. Methods
> are getting better, and there are synergies, but there is a distinct
No, there isn't, unless you believe that only humans (and unaugmented
ones at that) write software.
> De novo AI has a bootstrap threshold, which means that while the
> hardware for
> human-level AI might be already available, or will be shortly, it
> won't get
> used until bootstrap succeeds. Bootstrap of de novo AI is
> very expensive, and hence will definitely require molecular circuits.
> moles of them.
This packs in a lot of assumptions about the possible paths to SAI.
> Computers nowadays are not all-purpose, as such AI takes special
> architectures. Very unlike what they teach in CS classes. It will
> take AI codes, which are otherwise useless but for adaptive robotics
Interesting, as the gaming world is quite strong and often drives
significant consumer hardware improvements. I shudder to contemplate
what AI characters designed for standard violent adventure-type games
would become if they ran on sufficient hardware and reached the
bootstrap threshold. Military battle-sim entities would yield roughly
equivalent nightmares.
> There is very little spontaneity about building a supercritical AI. It
> is a
> deliberate project, with a very specific goal. Google is not going to
> suddenly awaken, and start commenting on your queries.
It would not be that difficult to mate a current-generation Eliza to
query history and analysis tools, thus producing fairly interesting
comments on queries. Add in somewhat better psychological modeling and
it would not be difficult to frighten most evolved chimps.
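A minimal sketch of that idea, with all rules and helper names invented here for illustration: an Eliza-style pattern matcher that also consults the user's query history, so its comments appear to "know" what you have been searching for.

```python
import re

# Hypothetical Eliza-plus-history responder. The rules below are toy
# examples; a real system would have far richer patterns and analysis.
RULES = [
    (re.compile(r"\bwhy\b", re.I), "You seem to be looking for explanations."),
    (re.compile(r"\bhow to\b", re.I), "Practical questions again, I see."),
]

def comment_on(query, history):
    """Log the query, then produce an Eliza-style comment on it."""
    history.append(query)
    for pattern, reply in RULES:
        if pattern.search(query):
            return reply
    # History-aware fallback: notice repeated openings across queries.
    if len(history) > 1 and history[-2].split()[:1] == query.split()[:1]:
        return "You asked something similar a moment ago."
    return "Tell me more about what you are searching for."

history = []
print(comment_on("why is the sky blue", history))
```

Even this crude pattern-plus-history trick can feel uncannily observant, which is the point: the appearance of awareness is much cheaper than the real thing.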