[ExI] The Actual Visionary of the Future
eugen at leitl.org
Fri Oct 25 19:51:15 UTC 2013
On Fri, Oct 25, 2013 at 11:48:25AM -0600, Kelly Anderson wrote:
> Even Ray doesn't expect Moore's Law to run indefinitely. In his book TSIN,
Ray thinks everything is exponential.
> he talks about the ultimate physical limitations of computing with matter.
> If we stay on the current track, as Ray predicts, we will hit limitations
> of physics in a dozen decades or so. The interesting point for humanity is
We are constantly running into limits, but people have a short memory.
Nobody remembers the time when the clocks stopped doubling. People
forget that hard drives stopped doubling, at least for a short while.
People are unaware of finer points like memory bandwidth stagnating while raw compute kept scaling.
> that point puts computers at many billions of times smarter than us in a
Many billions of times smarter than us, by which metric?
> relatively short time frame compared to the ultimate limits.
My personal time horizon is very short, probably less than 40 years if
I'm lucky. My personal interests lie squarely with a working
suspend function; the future had better get the resume part right.
> I know you MUST believe that computers will continue to get faster, even if
> they don't quite keep up the doubling pace. Right?
What does faster mean? The only reliable way to tell is a benchmark.
The only really relevant benchmark is the one that runs your
problem. By that measure, faster never kept up with Moore's law.
Moore's law is about affordable transistors, and those only
give you a canvas for your potential. So your gains are less
than Moore, and sometimes a lot less. For me personally,
the bottleneck in classical (GPGPU is classical) computers
is worst-case memory bandwidth. Which is remarkably sucky,
if you look it up.
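The only benchmark that matters is the one that runs your problem, but even a crude streaming-copy microbenchmark shows how far below the spec sheet real memory traffic sits. A minimal Python sketch (this measures best-case sequential bandwidth; worst-case random access is far worse still):

```python
import time


def copy_bandwidth_gbps(n_bytes=256 * 1024 * 1024, repeats=5):
    """Estimate best-case streaming memory bandwidth via a large
    buffer copy. Worst-case (random-access) bandwidth, the real
    bottleneck, is much lower than this number."""
    src = bytearray(n_bytes)
    dst = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst[:] = src  # C-level memcpy under the hood
        best = min(best, time.perf_counter() - t0)
    # A copy moves n_bytes read plus n_bytes written.
    return 2 * n_bytes / best / 1e9


print(f"~{copy_bandwidth_gbps():.1f} GB/s best-case streaming copy")
```

Run that against your platform's theoretical peak and the gap is the point.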
Ok. So, now your transistor budget no longer doubles in
constant time; that time keeps increasing. It's roughly
three years by the end of this year, no longer 18 months.
Physical feature size limits are close behind, and your
Si real estate is a 400 mm pizza, max. WSI gives you a
factor of two by making yield quantitative, but it wreaks
havoc on your computation model, because grain size starts
being tiny (less than mm^2), and asks for asynchronous
shared-nothing, and did I mention fine-grained? So no
TBytes of RAM for your LUT. The next step is FPGA, as in
runtime-reconfigurable. That *might* give you another
factor of 2, or maybe even 4. Stacking is off-Moore, but
it will do something, particularly giving cache-like
access to your RAM, as long as it's a few 10 MBytes, max.
And then you have to go real 3d, or else there's a gap.
My guess is the gap is somewhere 15-20 years long, but
we've got maybe 10 years until the saturation curve is pretty flat.
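To see what a stretching doubling time does to your budget, a toy model (the starting 18 months is from above; the 15% stretch per generation is purely an illustrative assumption of mine):

```python
def transistor_growth(years, t_double0=1.5, stretch=0.15):
    """Relative transistor budget when each doubling takes
    `stretch` (e.g. 15%) longer than the previous one.
    Illustrative numbers, not measured ones."""
    budget, t, t_double = 1.0, 0.0, t_double0
    while t + t_double <= years:
        t += t_double
        budget *= 2.0
        t_double *= 1.0 + stretch
    return budget


# Constant 18-month doubling vs. a 15%-per-generation stretch,
# over the same 10 years:
print(transistor_growth(10, stretch=0.0))   # classic Moore
print(transistor_growth(10, stretch=0.15))  # stretching doubling time
```

Over a decade the stretched curve falls a factor of four behind the classic one, and the divergence only grows from there.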
> > The trouble with cornucopians like Kurzweil is that they
> > cheerfully and cherrypickingly apply LAR to anything under
> > the sun, and never admit it when reality disproves them.
> I'd be happy to admit when reality disproves anything. Ray has never
> applied LAR to oil seeking technology, for example, as it just doesn't
He implied we'll run on 100% thin-film PV in 16 years.
That was 2011, so make that 14 years. This means 4.2 TWp/year just
for power in a linear model, never mind matching synfuel capability
(try doubling that, after all is sung and done -- 8 shiny TWp/year).
We're not getting the linear model. In fact, we're arguably sublinear.
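Back-of-envelope check of that TWp/year figure, with my own assumed inputs (world average primary power of ~17 TW and a PV capacity factor of ~0.2 are my assumptions, not Ray's; they land in the same ballpark):

```python
# Assumed inputs, not from Kurzweil: world average primary power
# and a typical thin-film PV capacity factor.
world_avg_power_tw = 17.0   # assumption: ~17 TW average primary power
capacity_factor = 0.2       # assumption: PV delivers ~20% of nameplate
years = 16

# Peak (nameplate) capacity needed to cover average demand,
# and the constant build rate to get there linearly.
peak_capacity_needed = world_avg_power_tw / capacity_factor  # TWp
linear_build_rate = peak_capacity_needed / years             # TWp/year
print(f"{linear_build_rate:.1f} TWp/year")
```

That comes out around 5 TWp/year, the same order as the 4.2 above, and an order of magnitude beyond any historical build rate.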
> apply. However, in a sense it does apply. We do get some percent better at
> extracting what's left each year. That doesn't mean we get an exponential
No, in terms of net energy we're not getting better. We're actually getting worse.
> amount of oil, since there's a limited amount of the stuff. But it does
> mean that we get exponentially better at finding what's left (note that
We're not getting better. We've mapped all the stuff; there are almost
no unknown unknowns. And dropping EROEI and even dropping volume
(not net energy, volume!) per unit of effort is pretty much the
opposite of exponential. Does a 40% decay rate per well per year mean
anything to you?
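For anyone who hasn't run that number: a 40%/year geometric decline looks like this (a sketch; the initial output of 100 is just for illustration):

```python
def well_output(initial, decline=0.40, years=5):
    """Yearly output of a well declining `decline` (40%) per year,
    modeled as simple geometric decay. Illustrative only."""
    return [initial * (1.0 - decline) ** y for y in range(years + 1)]


# A well starting at 100 units/year:
for year, out in enumerate(well_output(100.0)):
    print(f"year {year}: {out:.1f}")
```

Output halves in well under two years; after five years the well delivers under 8% of its initial flow, which is why you have to keep drilling just to stand still.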
> this curve is likely much more gentle than computing, with a doubling of
> reserves we can get at maybe every 20 or 50 years. I don't know.)
There are no exponentials in infrastructure. There is an early
sigmoidal phase that looks that way, but we've left that behind already.
> > This is the opposite of science.
> It is a part of science, the hypothesis part. LAR applied to computing
> available per dollar in particular is a hypothesis formed in the mid 1960s.
> As far as I know, we are still more or less on that track, though they have
No, we're not. See benchmarks.
> had to cheat with multiple cores, which does make writing software that
Multiple-core SMP doesn't scale. People who thought multithreading would scale
are going to get a nasty surprise. People who expect globally coherent
caches to scale are going to get a nasty surprise. People who assume
global shared memory is a thing are going to get a nasty surprise.
People who hear about Amdahl's Law for the first time have to stop worrying
and embrace nondeterminism. People who expect reliable systems at the hardware
level are gonna have a bad time.
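For scale, Amdahl's Law fits in a few lines (a sketch; the 95%-parallel figure is just an illustrative assumption):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's Law: overall speedup on n_cores when only
    `parallel_fraction` of the work can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)


# Even a 95%-parallel workload tops out near 20x on 1024 cores;
# the serial 5% eats everything else.
print(amdahl_speedup(0.95, 1024))
```

That residual serial fraction is exactly why throwing cores at the problem stops helping, and why the asynchronous shared-nothing model above is the only way through.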
> takes advantage of it much more difficult. For the next few years, we can
> safely believe that computers will continue to get cheaper. Maybe for the
> next fifty years, but who knows. For sure for the next 5 though. Intel has
> it all mapped out.
> > We've had a number of such people, which turned out a liability
> > to transhumanism in the end. Our handicap is already sufficiently
> > high, we don't need such monkeys on our back.
> How is being pessimistic about the future more helpful?
It obviously isn't. You have to be a realist. The problem with
optimists is that they think they're realists. But, unfortunately,
they aren't. When in a tough spot, never, ever team up
with an optimist.
More information about the extropy-chat mailing list