[extropy-chat] Singularity economic tradeoffs
Dan Clemmensen
dgc at cox.net
Mon Apr 19 12:50:17 UTC 2004
Eugen Leitl wrote:
>[I wrote, responding to Samantha's comment that a human is not part of an SI after bootstrap]
>
>>It appears that we are in complete agreement. The seed SI has a human
>>component, this seed SI bootstraps, and eventually (within a week in my
>>scenario) reaches a level at which the human component is no longer
>>useful. My point is that un-augmented humans do not need to create an
>>AI to reach this point.
>
>Can you restate your point? It doesn't follow, at least according to your
>argument flow above.
>
OK. I think that humans will create a system (or a set of separate
applications) that is intended to simplify the software implementation
task, using brute-force techniques rather than AI and leaving the
decision-making to the human. I think that a human will then use the
system to improve the system, and that this bottom-up approach will
lead to a system that is "smart" enough to augment itself further
without human intervention.

My argument flow is simply that humans can make decisions very rapidly
if the decision criteria are presented properly.
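
To make that concrete, here is a minimal sketch of the kind of
human-in-the-loop improvement cycle I have in mind. It is purely
illustrative: the function names, the toy scoring objective, and the
whole setup are my own assumptions, not any existing system. The
machine does the brute-force work of generating and scoring candidates;
the human only picks from a short, ranked menu.

import random

def generate_candidates(current, n=5):
    # Brute force: propose n random variations of the current design.
    # (Toy stand-in: a "design" is just a number here.)
    return [current + random.gauss(0, 1.0) for _ in range(n)]

def score(candidate):
    # The machine evaluates each candidate mechanically (toy objective).
    return -abs(candidate - 42.0)

def improve(current, rounds=3):
    for _ in range(rounds):
        ranked = sorted(generate_candidates(current), key=score,
                        reverse=True)
        # Present the decision criteria properly: a short ranked list.
        for i, c in enumerate(ranked):
            print("%d: candidate=%.2f  score=%.2f" % (i, c, score(c)))
        choice = int(input("Pick a candidate: "))  # fast human decision
        current = ranked[choice]
    return current

if __name__ == "__main__":
    improve(0.0)

The point of the toy is only the division of labor: the expensive
search and evaluation are mechanical, while the human contributes one
quick judgment per round over a small, well-presented menu.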
I do not understand why people think that understanding and emulating
human problem-solving is the best approach to machine problem-solving.
Cars don't emulate horses, planes don't emulate birds, and chess
computers don't emulate chess masters, although in each case inventors
tried and failed with an emulation approach.