[ExI] The Actual Visionary of the Future
eugen at leitl.org
Mon Oct 28 10:37:12 UTC 2013
On Sun, Oct 27, 2013 at 01:50:23PM -0600, Kelly Anderson wrote:
> That's simply not true Eugen. You're better than that.
That was obviously hyperbole, to make a point. He is, however,
prone to seeing exponentials where there are none.
> I believe MORE things are exponential than Ray does, and even I don't
> believe everything is exponential. That being said, lots of things are,
> like the savings in your bank account.
Exponential growth of compound interest is a textbook case
where your numerical model and the underlying physical-layer
processes increasingly diverge, requiring periodic, painful
readjustments.
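To make the point concrete, here is a minimal sketch with hypothetical
numbers (1000 units at a 5%/year rate); the model happily compounds
forever, regardless of what the real economy underneath it can deliver:

```python
# Compound interest: the textbook exponential.
# Hypothetical inputs: 1000 units of principal at 5% per year.
principal = 1000.0
rate = 0.05

balance = principal
for year in range(50):
    balance *= 1 + rate   # the model compounds unconditionally

# After 50 years the model claims roughly an 11-fold gain,
# whether or not the physical economy grew to match.
print(round(balance, 2))
```

The divergence the paragraph above describes is exactly the gap between
this curve and whatever the physical layer actually produced.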
> > People
> > forget that hard drives stopped doubling, at least for a short while.
> Because of a flood in Thailand. Nobody has said there wouldn't be bumps in
Thailand was not the reason.
We're stuck at 4 TB because manufacturers ran into the limits
of a particular technology. In the case of platters full of
spinning rust the snag is temporary, as two successor
technologies (HAMR and BPM, not new, but close to becoming
mature enough for practical applications) are about to enter
the marketplace, so there's probably another order of magnitude
still to go before the end of the line. That makes it 40 TB.
Coincidentally, NOR flash has recently also entered the same
regime: the time for surface scaling is running out. The only
alternative is 3d volume integration. We do not have anything
in the pipeline to arrive in time, so there will be a gap.
The only technology to interpolate would be Langmuir-Blodgett
serial layer deposition, with corresponding 2d liquid-mosaic
self-assembly/alignment. I'm not aware of that technology
being ready for deployment. Next after that is 3d crystal
self-assembly from solution, which is even further away.
> the road, just that there was an overall trend.
> > People are unaware of finer points like
> > http://postbiota.org/pipermail/tt/2013-October/014179.html
> Ok, I read that, and what it said in a nut shell is "fuck this is hard".
Yes, this is the nature of limits. Instead of constant doubling
times, the last few doublings show longer and longer steps. As I
told you, we're no longer at 18 months but at a 3-year doubling
time at this moment, and the next doubling times will be longer
still. This means the semilog plot is no longer linear. No more
Moore for you.
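A quick sketch of what lengthening doubling times do to the curve
(illustrative step lengths only, not measured data):

```python
# Compare fixed vs. stretching doubling times.
def capacity_after(doubling_times_months):
    """Each entry is the wait until the next doubling.
    Returns (total elapsed months, capacity multiple)."""
    capacity = 1.0
    elapsed = 0
    for dt in doubling_times_months:
        elapsed += dt
        capacity *= 2
    return elapsed, capacity

# Classic Moore: four doublings at 18 months each -> 16x in 6 years.
print(capacity_after([18, 18, 18, 18]))   # (72, 16.0)

# Stretching steps: the same 16x now takes 126 months (10.5 years).
print(capacity_after([18, 24, 36, 48]))   # (126, 16.0)
```

Same number of doublings, ever longer waits: on a semilog plot the
straight line bends over.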
> No, I expect it to come to a screeching halt.
Why do you expect that? Look at the price tag of the
zEnterprise 196. Obviously, a somewhat higher margin
than on a 50 USD ARM SoC.
> > > that point puts computers at many billions of times smarter than us in a
> > Many billions times smarter than us, using which metric?
> Any you wish to put forward.
I cheerfully admit that I have absolutely no idea. It's as if
ants were asked to design ways to evaluate people which make
sense to people.
> > relatively short time frame compared to the ultimate limits.
> > My personal time is very short, probably less than 40 years if
> > I'm lucky. My personal interests lie squarely with a working
> > suspend function, the future better get the resume part
> > right.
> If they read your posts here Eugen, they might decide not to thaw you out.
> Who needs a pessimist in a utopia... :-)
Utopia? I'm afraid I have another piece of bad news for you.
Very bad news, I'm afraid...
> > > I know you MUST believe that computers will continue to get faster, even
> > if
> > > they don't quite keep up the doubling pace. Right?
> > What does faster mean? The only reliable way is a benchmark.
> > The only really relevant benchmark is the one that runs your
> > problem. As such faster never kept up with Moore's law.
> > Moore's law is about affordable transistors, and that one
> > gives you a canvas for your potential. So your gains are less
> > than Moore, and sometimes a lot less. For me personally,
> > the bottleneck in classical (GPGPU is classical) computers
> > is worst-case memory bandwidth. Which is remarkably sucky,
> > if you look it up.
> The problem I care about the most is computer vision. We are now
Computer vision is very easy, actually, and quite well understood.
The low number of layers and the purely local connectivity (fanout)
that a retina needs are within the envelope of silicon fabrication.
> approaching automated vehicles becoming a reality. I thought it would
> happen in 2014 since 2004. It may be delayed a year or two by bureaucrats
> and lawyers, but the technology should be cheap enough for luxury cars to
I'm afraid luxury anything is going to be a very, very small market
in the coming decades.
I agree that autonomous cars are mostly a very good thing, unless
you happen to be a trucker, or a car maker. Whatever Germany
earns on car making is about enough to pay for its fossil fuel
imports.
> have highway cruise control (including steering) by 2014 or 2015. So my
> venture into guessing the future was pretty close, using Ray's technique.
> > Ok. So, now your transistor budget no longer doubles in
> > constant time, but that time keeps increasing. It's roughly
> > three years by end of this year, no longer 18 months.
> > Physical feature size limits are close behind, and your
> > Si real estate is a 400 mm pizza, max. WSI gives you a
> > factor of two by making yield quantitative, but it wreaks
> > havoc on your computation model, because grain size starts
> > being tiny (less than mm^2), and asks for asynchronous
> > shared-nothing, and did I mention fine-grained? So no
> > TBytes of RAM for your LUT. The next step is FPGA, as in
> > runtime reconfigurable. That *might* give you another
> > factor of 2, or maybe even 4. Stacking is off-Moore, but
> > it will do something, particularly giving cache-like
> > access to your RAM, as long as it's few 10 MBytes max.
> I've predicted that they will go to 3D. It is the only logical way to go
Everybody and his dog has predicted that since the early 1970s.
The difficulty is actually making it happen, just in time as
semiconductor photolitho runs out of steam.
Guess what, that time is now. So, where is your 3d integration?
> from here, other than maybe 2 1/2 D first...
You can't have that by semiconductor photolitho. Stacking is
off-Moore. What else have you got?
> > And then you have to go real 3d, or else there's gap.
> True, unless something completely different comes along, which may not be
> highly likely.
New technologies typically take decades of development until
they're sufficiently mature to take on established technologies
that have run into their scaling limits.
> > My guess the gap is somewhere 15-20 years long, but
> > we've got maybe 10 years until saturation curve is pretty
> > damn flat.
> Ok. Then we can start making larger structures. It won't speed up due to
> decreasing transistor size, but it will be able to do useful work. Imagine
> a 3d CPU 5 inches on a side. That could do some serious work. More than a
> human brain.
The human brain is a 3d integrated assembly of computational
elements which are built from features on nm scale.
> > He implicitly implied we'll run on 100% of thin-film PV in 16 years.
> > That was 2011, so make that 14 years. This means 4.2 TWp/year just
> > for power in a linear model, nevermind matching synfuel capability
> > (try doubling that, after all is sung and done -- 8 shiny TWp/year).
> > We're not getting the linear model. In fact, we're arguably sublinear,
> > see
> > http://cleantechnica.com/2013/10/14/third-quarter-2013-solar-pv-installations-reach-9-gw/
> You obviously don't understand the nature of his prediction. If he says
Obviously. I expect predictions to be brittle, and that the originator
is prepared to eat some crow in case she is wrong. I'm old-fashioned
that way.
> that the doubling in solar efficiency is 3.5 years (going from memory) then
> half of the solar he envisions will be installed between July 2023 and
Thank you for explaining exponential growth to me. I think I first
understood it before I was 10. The nature of solar cells is such
that the only way to double the output is to double the surface
area, along with the infrastructure behind it: simple things like
10 GUSD plants, electric grid upgrades, storage systems, and the like.
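Using the figures already quoted in this thread (4.2 TWp/year in the
linear model, 14 years remaining as seen from 2013, and the roughly
9 GWp/quarter installation pace from the linked article), the gap is
easy to put a number on:

```python
# Back-of-envelope, using only figures quoted in this thread.
years_left = 14            # 2011 prediction of 16 years, seen from 2013
linear_rate_twp = 4.2      # TWp/year needed in the linear model

required_total = linear_rate_twp * years_left
print(round(required_total, 1))   # 58.8 TWp of nameplate capacity

# Cited installation pace: ~9 GWp per quarter in Q3 2013.
actual_rate_twp = 9e-3 * 4        # ~0.036 TWp/year
print(round(linear_rate_twp / actual_rate_twp))   # ~117x below the linear rate
```

Each doubling from here on means doubling that physical build-out,
not a die shrink.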
> 2027. What's being installed now probably is sublinear, that's what an
> exponential would predict. He didn't predict a linear model. We'll revisit
You're not understanding me. It used to be exponential, because
it's very easy to double very little. Until suddenly you have to
double quite a lot. This isn't a lily pond or a bacterial culture,
this is physical infrastructure.
So Ray is already wrong, once again. The trend is no longer exponential.
> his prediction in 2027 if we're both still communicating by then.
The prediction is 100% of electricity in 16 years. He then scaled
that back by saying 20 years. From 2011, that's 2031.
Given that we're already off-exponential, I expect that you'll keep
posting "I was wrong" every year.
> > apply. However, in a sense it does apply. We do get some percent better at
> > > extracting what's left each year. That doesn't mean we get an exponential
> > No, in terms of net energy we're not getting better. We're actually
> > getting worse.
> > > amount of oil, since there's a limited amount of the stuff. But it does
> > > mean that we get exponentially better at finding what's left (note that
> > We're not getting better. We've mapped all the stuff, there are almost
> > no unknown unknowns. And dropping EROEI and even dropping volume
> > (not net energy, volume!) per unit of effort is pretty much the
> > opposite of exponential. Does a 40% decay rate/well/year mean
> > anything to you?
> You misunderstand my point again. I know it's harder to get oil. But we
> develop new technologies for getting at what's left.
Fracking is 40 years old, and it is running into diminishing returns.
So where are your new technologies, which would need to be in wide
deployment already?
> > > this curve is likely much more gentle than computing, with a doubling of
> > > reserves we can get at maybe every 20 or 50 years. I don't know.)
> > There are no exponentials in infrastructure. There is an early
> > sigmoidal that looks that way, but we've left that already.
> Infrastructure can change rapidly. How long did it take for everyone to get
No. Infrastructure takes 30 years, frequently longer. That's a constant.
> a cell phone? Smart Phones? When electric cars make financial sense (if
How long did it take for everybody to get their own synfuel plant?
> they ever do) then people will switch to them quickly. Large infrastructure
Do you understand the logistics of car production? Battery manufacturing?
Dynamics of fleet exchange? Recharging infrastructure? Including the money
to fund it all? Do the math, it is really quite illuminating.
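A back-of-envelope version of that math, with rough round figures
(the fleet size, production rate, and pack size below are illustrative
assumptions, not data from this thread):

```python
# Fleet-turnover arithmetic with rough, round numbers.
world_fleet = 1.2e9        # assumed ~1.2 billion vehicles on the road
annual_production = 8e7    # assumed ~80 million new vehicles per year

# Even if every new vehicle were electric starting today,
# replacing the existing fleet takes:
years_to_replace = world_fleet / annual_production
print(years_to_replace)    # 15.0 years, before grid and chargers

# Battery side, assuming a 60 kWh pack per car:
kwh_per_car = 60
twh_per_year = kwh_per_car * annual_production / 1e9
print(twh_per_year)        # 4.8 TWh of cells needed per year
```

None of those are exponentials; they are production lines, mines, and
capital expenditure, which is the point about infrastructure timescales.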
> like roads and so forth will remain problematic until robotics is good
> enough to do much more of the job.
> > >
> > > > This is the opposite of science.
> > > >
> > >
> > > It is a part of science, the hypothesis part. LAR applied to computing
> > > available per dollar in particular is a hypothesis formed in the mid
> > 1960s.
> > > As far as I know, we are still more or less on that track, though they
> > have
> > No, we're not. See benchmarks.
> Data please. I can't find any. I have looked.
Try STREAM, though it's a synthetic benchmark.
It would be a reasonable assumption for retina-like processing
scaling. Deeper visual pipelines are different: here you need
something like fetching from a large (>>10 GByte) pool of memory.
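STREAM itself is a small C program (copy/scale/add/triad kernels).
Purely as a toy illustration of what it measures, here is a crude
pure-Python analogue of the copy kernel; the absolute number it
reports is meaningless, but the metric (bytes moved per second,
not FLOPS) is exactly the bottleneck discussed above:

```python
import time

# Crude STREAM-like "copy" kernel: one pass over a working set,
# reading n bytes and writing n bytes. Not the real benchmark.
n = 64 * 1024 * 1024                 # 64 MiB working set
src = bytearray(n)

t0 = time.perf_counter()
dst = bytes(src)                     # read n bytes, write n bytes
elapsed = time.perf_counter() - t0

gb_moved = 2 * n / 1e9               # count both the read and the write
print(f"~{gb_moved / elapsed:.1f} GB/s effective copy bandwidth")
```

Run the real STREAM (in C, with a working set larger than cache) for
numbers worth comparing across machines.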
> > People who hear about Amdahl's Law for the first time have to stop worrying,
> > and embrace nondeterminism. People who expect reliable systems at hardware
> > level are gonna have a bad time.
> I disagree with that. There will be reliable hardware, or they won't be
If you want to not run into Amdahl you need to embrace nondeterminism.
Building test harnesses just got a bit harder.
> able to sell it. No matter how slow the previous generation was. It is hard
Yes, there will be unreliable hardware. This is one of the problems in
exascale: unreliable transport and unreliable components (as in: parts
of your system keep failing at runtime, and you diagnose and remap to
hot spares, all without breaking stride). Beyond that, you've got
stochastic computing elements. That's one of the joys of living at
the bleeding edge.
> enough to get programmers to do multi-threading. It would be damn close to
> impossible to get them to switch to a model where the answer might not be
There is no longer "exactly right" there is only "good enough".
> > > takes advantage of it much more difficult. For the next few years, we can
> > > safely believe that computers will continue to get cheaper. Maybe for the
> > > next fifty years, but who knows. For sure for the next 5 though. Intel
> > has
> > > it all mapped out.
> > >
> > >
> > > > We've had a number of such people, which turned out a liability
> > > > to transhumanism in the end. Our handicap is already sufficiently
> > > > high, we don't need such monkeys on our back.
> > > >
> > >
> > > How is being pessimistic about the future more helpful?
> > It obviously isn't. You have to be a realist. The problem with
> > optimists is that they think they're realists. But, unfortunately,
> > they aren't. When in a tough spot, never, ever team up
> > with an optimist.
> A pessimist will just hole up in his cave. I refer you to "The Croods" to
I don't know what a pessimist would do. I do know that the only guy
who'd still have water when his car breaks down in the desert is a
realist. The optimists always end up as bleached bones. Your call.
> see how that worked out in one fictional setting.
More information about the extropy-chat mailing list