[ExI] The Actual Visionary of the Future
kellycoinguy at gmail.com
Tue Oct 29 19:50:53 UTC 2013
On Tue, Oct 29, 2013 at 5:29 AM, Eugen Leitl <eugen at leitl.org> wrote:
> On Mon, Oct 28, 2013 at 01:51:22PM -0600, Kelly Anderson wrote:
> > I think his data is well researched. Whether all of the curves extend into
> > the future, and just how far they will extend, is guesswork.
> No, no, no. If you're formulating a theory, you have to define
> its scope of applicability, and margins beyond which the
> theory is falsified.
This is not a theory like the theory of gravity or the theory of evolution.
It is a rule of thumb or a hypothesis by which you can approximate what
things will PROBABLY be like in the future. The future cannot be known with
certainty. For example, we all know of existential risks that would throw
Moore's Law out the window.
It's more like using climate projections to guess the weather on a specific
date than using orbital dynamics to determine the location of Jupiter on
July 4, 3200. Both are forecasts, but I know which one I'd rather depend on.
If I were planning a wedding months out, I would rather have the Farmer's
Almanac (based upon history on a specific date) than nothing. The LoAR (Law of
Accelerating Returns) is more like the Farmer's Almanac than planetary tables.
> If it's not a theory, then why are we wasting time on
> such assclownage?
Because even as a rule of thumb it is better than going completely in the
dark. If you were in a cave, would you not want a small candle as opposed
to nothing at all? I don't know if $1000 worth of computation will
approximate the power of the human brain in 2025, 2029, 2035 or 2040 but
the center of my guess is 2029 based on Moore's Law. That's slightly better
than saying, "I have no idea", isn't it?
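That center-of-guess reasoning can be made concrete. Here is a minimal back-of-envelope sketch, with the caveat that every number fed in is an illustrative assumption (not Ray's actual figures): that $1000 bought roughly 1e12 ops/s in 2013, that a brain needs roughly 1e16 ops/s, and that the doubling time is 18 months.

```python
from math import log2

# Back-of-envelope sketch: when does a fixed budget buy brain-scale
# compute?  All the numbers fed in below are illustrative assumptions.
def crossover_year(start_year, ops_per_dollar, target_ops,
                   budget_dollars, doubling_years):
    """Year when `budget_dollars` buys `target_ops`, assuming
    ops-per-dollar doubles every `doubling_years` years."""
    start_ops = ops_per_dollar * budget_dollars
    doublings_needed = log2(target_ops / start_ops)
    return start_year + doublings_needed * doubling_years

# Assumed: $1000 buys ~1e12 ops/s in 2013; a brain needs ~1e16 ops/s.
year = crossover_year(2013, 1e9, 1e16, 1000, 1.5)  # lands near 2033
```

Nudging the doubling time or the brain estimate by a factor of a few moves the answer across the whole 2025-2040 range, which is exactly the spread of guesses above.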
> > > > believe everything is exponential. That being said, lots of things
> > > > like the savings in your bank account.
> > >
> > > Exponential growth of compound interest is a textbook case
> > > where your numerical model of physical layer processes and reality
> > > increasingly diverge, requiring periodic, painful readjustments.
> > >
> > I have never heard of a case where a bank simply refused to pay interest
> You have never heard of banks going broke, assets seized, currency
> hyperinflated? Really?
Of course I have. Those things can happen because of bad management, war,
bad government control over money and the like.
The point is that banks don't ever go out of business JUST because they
can't pay interest.
Monte dei Paschi di Siena was founded in 1472. If you put a dollar in
there, you would now have a LOT of money, and I doubt the bank would go out
of business just from letting that money sit there for hundreds of years.
Interestingly, this bank almost went out of business in 2008... Just shows
that the level of stupidity was nearly unprecedented.
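The arithmetic behind "a LOT of money" is worth spelling out, with the caveat that any interest rate over 541 years is a pure assumption, and that inflation, seizures, and bank failures (Eugen's point) are ignored:

```python
def compound(principal, annual_rate, years):
    """Value of `principal` after `years` of annual compounding."""
    return principal * (1 + annual_rate) ** years

years = 2013 - 1472  # 541 years since the bank's founding
modest = compound(1.0, 0.02, years)  # ~$45,000 at an assumed 2%
large = compound(1.0, 0.04, years)   # ~$1.6 billion at an assumed 4%
```

The spread between those two outcomes is the crux of the disagreement: a two-point change in the assumed rate turns a modest sum into a fortune, and sustained inflation can just as easily turn either into pocket change.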
> Are you honestly believing that money likes to work, and it
> keeps growing in the bank vaults, like early miners thought
> metal grew in the mountain, so that they left there some so
> that it could breed?
Yes, I believe money properly invested (say in the stock market) does like
to work. It does create value. I know this because I've personally
witnessed 8 million dollars grow into 32 million dollars over a period of
ten years. Without investment, that could never have happened.
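For what it's worth, that anecdote implies a specific growth rate, computed here as a sketch (the dollar figures come from the paragraph above; nothing else is assumed):

```python
from math import log

def cagr(start, end, years):
    """Compound annual growth rate implied by growing start -> end."""
    return (end / start) ** (1 / years) - 1

rate = cagr(8e6, 32e6, 10)         # ~0.149, i.e. ~14.9% per year
doubling = log(2) / log(1 + rate)  # quadrupling in 10y = doubling every 5y
```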
> > because there was just too much money in the account. So what are you
> > referring to here?
> > > > > People
> > > > > forget that hard drives stopped doubling, at least for a short while.
> > > > >
> > > >
> > > > Because of a flood in Thailand. Nobody has said there wouldn't be bumps
> > > > in the road.
> > >
> > > Thailand was not the reason.
> > >
> > > We're stuck at 4 TB because they ran into limits of a particular
> > > technology.
> > >
> > I'm baffled by your use of the word "stuck" here. We just got to 4 Tbytes
> You're getting far too frequently baffled for my liking. I'm showing
> you instances where reality deviates from the nice linear semi-log.
> There have been multiple smooth technology handovers in the platter
> areal density, which however show a different scaling.
I think this goes more to proving my point than yours.
> If you think you're seeing a linear semilog plot in there, then
> throw away your ruler. Or buy new glasses. If you're now agreeing
> that the growth is saturating, then why are you wasting my time?
The biggest downcurve in the plot is the part that projects into the
future. Let's come back in 5 years and see what actually happens.
> > not that long ago. We always get "stuck" by this definition. I have
> > attached my spreadsheet of hard drive prices that I have been maintaining
> The metric you're looking for is areal density.
The metric I care about is inflation-adjusted dollars/byte. I couldn't care
less about areal density, except that it is one (and only one) mechanism by
which dollars/byte goes down.
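A spreadsheet like the one attached can be reduced to the trend that matters here with an ordinary least-squares fit on a semi-log scale. The price points below are made-up placeholders, not the actual spreadsheet data:

```python
from math import log10

# Hypothetical inflation-adjusted $/GB observations (year, price).
data = [(2005, 0.60), (2007, 0.30), (2009, 0.11), (2011, 0.05), (2013, 0.04)]

# Least-squares slope of log10(price) versus year.
n = len(data)
xs = [year for year, _ in data]
ys = [log10(price) for _, price in data]
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)

halving_time = -log10(2) / slope  # years for $/GB to halve (~1.9 here)
```

With real data, the interesting check is whether the post-2011 points sit above the fitted line; that residual is precisely the "stuck" versus "bump in the road" dispute.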
> > for a few years, but initially got elsewhere. I welcome comments.
> > > In case of platters full of spinning rust the snag is temporary,
> > > as there are two successor technologies about to enter the
> > > marketplace (HAMR and BPM, not new but close to becoming
> > > mature enough for practical applications) so there's probably
> > > another order of magnitude still to go before end of the line.
> > > That makes it 40 TB.
> > >
> > That hardly seems like "stuck" to me. Knowing how we're going to get the
> > next order of magnitude is good enough for me.
> No, deviations from linear semi-log plot are definitely not good enough
> for Ray, and I have to agree with him. If you like leaning out of the
> windows very far, prepare to deal with gravity.
I'm sure deviations from the semi-log plot are of concern to many people.
I'm more interested in the idea that things continue to improve at an
astonishing (perhaps near exponential) rate. I see no indication that that
kind of progress is going to end, even if it slows slightly. You're just
playing with the exponent a bit. That doesn't change the end result much.
> > Aside from that, there are things out there that promise to give the next
> I'm not interested in promises. I'm interested in past data that
> show where real world disagreed with prior predictions.
> > order of magnitude after that... such as:
> > https://en.wikipedia.org/wiki/Racetrack_memory
> > Which is clearly not ready for prime time, but is a good example of the kind
> > of things that might happen when brilliant people are tasked with an
> Brilliant people can't make features smaller than atoms. Brilliant
> people have no magic wands to short-cut technology maturation times
> so that they spring full-formed from nothing, just as Athena sprang
> from Zeus' forehead. Brilliant people can't make a refinery grow
> out overnight, for free.
I agree that brilliant people will find it EXTREMELY difficult to build
features smaller than atoms, perhaps even impossible. But we aren't close
enough to the problem to say that we KNOW it is impossible yet. If it is
possible, it might involve something like a highly controlled neutron star.
And IF this is the case, that would explain the Fermi paradox. This is pure
conjecture. There are a LOT of things we can do before we need subatomic
manipulation. We aren't even close.
Things like refineries do take a long time to build using today's
techniques. I can see a day coming though where building such things will
not take as long as they do today. Humans can only move so fast, but robots
can move faster.
I refer you to:
If you could imagine robots programmed by sophisticated scheduling systems
to build a refinery, I propose that you could, in principle, build a
refinery rather quickly. The slowest part potentially is getting government approvals.
Here is my biggest point. If you can imagine being able to do something
given a reasonable amount of time and money with current technology, then,
given sufficient incentive, I can't imagine such a thing not being
accomplished. There is sufficient incentive to improve computational
efficiency, therefore I cannot see such improvements not being accomplished.
> > objective.
> > > Coincidentally, NOR flash has recently also entered
> > > scaling limits.
> > >
> > > The time for surface scaling is running out. The only
> > > alternative is 3d volume integration. We do not have anything
> > > in the pipeline to arrive in time, so there will be a gap.
> > > The only technology to interpolate would be Langmuir-Blodgett
> > > serial layer deposition, with according 2d liquid mosaic
> > > self-assembly/alignment. I'm not aware of this technology
> > > to be ready for deployment. Next after that is 3d crystal
> > > self-assembly from solution. This is even further away.
> > >
> > That's ok, we have time to get this stuff right before falling off the
> > curve.
> We've already fallen from the semiconductor litho curve.
Let's talk about calculations per second per inflation-adjusted dollar. I am
intensely NOT interested in the details of how it happens. That is
someone/everyone else's job at the moment.
> See the NOR flash scaling at the URL I posted earlier.
> We've already fallen off the PV deployment curve (and we
> were never on the according infrastructure curve in the
> first place).
> You can't jump from zero TW to 20 TW in 20 years.
> Not unless you have MNT, and collectively we made sure we
> failed to develop that.
When you say MNT, do you mean molecular nanotechnology?
> > > > the road, just that there was an overall trend.
> > > >
> > > >
> > > > > People are unaware of finer points like
> > > > > http://postbiota.org/pipermail/tt/2013-October/014179.html
> > > >
> > > >
> > > > Ok, I read that, and what it said in a nutshell is "fuck this is
> > >
> > > Yes, this is the nature of limits. Instead of constant doubling
> > > times the last few show longer and longer steps. As I told you,
> > > we're no longer at 18 months but at 3 years doubling time this
> > > moment. The next doubling times will be longer. This means
> > > that linear semilog plot is no longer linear. No more Moore for you.
> > >
> > And yet, it is still doubling rapidly. The end result is the same, just
> Which part of "no more constant doubling times for you" don't you understand?
As long as it continues to double, I don't care if it takes 18 months or 24
or 36. You can't point to a date in the future and say "improvement stops
here" can you? As long as it is doubling somewhat close to current levels,
the things I care about will continue to happen on the time scales I care about.
I do care if the whole damn boat is going down. But that is a separate
conversation. So long as there are SOME rich people/corporations/AGIs
paying for the development of this stuff, I think it will continue to be developed.
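How much the doubling time matters depends entirely on the horizon, and it is easy to quantify (pure arithmetic, no assumptions beyond the stated doubling times):

```python
from math import log2

def years_to_factor(factor, doubling_years):
    """Years for an exponential process to improve by `factor`."""
    return log2(factor) * doubling_years

# A 1000x improvement under the doubling times being argued about:
at_18_months = years_to_factor(1000, 1.5)  # ~15 years
at_36_months = years_to_factor(1000, 3.0)  # ~30 years
```

So "playing with the exponent" leaves the end result intact but can push a 2029-style date out by a decade or more; that delay is the substance of both positions here.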
> > a slightly different time scale. And there is no guarantee that we won't
> > Which part of "you can't make widgets smaller than single atoms" don't you understand?
What part of "There's plenty of room at the bottom" do you not understand?
We're not close to the atomic limits on most things.
> > make a hop with a new technology and get back on any given curve. It can
> > happen.
> A gold meteorite can fall in my garden. Hey, it could happen.
Touché. Getting back on a curve after having fallen off of it is really
difficult. It has happened occasionally, but it is rough.
> > >
> > > > No, I expect it to come to a screeching halt.
> > >
> > > Why do you expect that? Look at the price tag of the
> > > zEnterprise 196. Obviously, a somewhat higher margin
> > > than on a 50 USD ARM SoC.
> > >
> > Sorry, you've lost me here. I don't know what these things are.
> It's a cheap mainframe, 75 kUSD entry level. Obviously, CPUs
> built from such can be made from unobtainium. But flash drives
> and mobile CPUs have low margins, so there's diminished incentive
> to go to the next node (especially if the next node has lower
> performance than current one).
Thank you for explaining that. I don't pay much attention to mainframes, or
even highly parallel computers or supercomputers in my work. I'm mostly
interested in PCs, tablets and cell phones, and perhaps Google glass.
Consumer related stuff. Yes, the cloud changes that, but I don't play in
that world much.
Given that, I'm still not understanding your point. Are these mainframes
getting more expensive as time goes on? Are we not on some kind of curve
with respect to them? My understanding is that rack based computing is
chugging along at an acceptable rate of growth. Am I missing something?
> > > > If they read your posts here Eugen, they might decide not to thaw you
> > > out.
> > > > Who needs a pessimist in a utopia... :-)
> > >
> > > Utopia? I'm afraid I have another piece of bad news for you.
> > > Very bad news, I'm afraid...
> > >
> > Anyone looking at 2013 from the time frame of 1913 would clearly call
> Anyone looking at an arbitrary time frame knows that darwinian
> evolution still applies.
But we are in the era of memetic evolution, not darwinian. And memes
replicate faster, and have a higher mutation rate.
> > utopia, at least from the technological standpoint. Also from the number
> > people operating under democracy, decreased violence and a number of
> > points. Not that it is utopia in every way.
> My thesis is that a postecosystem has a food web.
Ok, so here we may have a difference of opinion. I concede that the future
ecosystem will have an energy web, but not necessarily a food-based one.
> > > > > > I know you MUST believe that computers will continue to get faster
> > > > > > even if they don't quite keep up the doubling pace. Right?
> > > > >
> > > > > What does faster mean? The only reliable way is a benchmark.
> > > > > The only really relevant benchmark is the one that runs your
> > > > > problem. As such faster never kept up with Moore's law.
> > > > > Moore's law is about affordable transistors, and that one
> > > > > gives you a canvas for your potential. So your gains are less
> > > > > than Moore, and sometimes a lot less. For me personally,
> > > > > the bottleneck in classical (GPGPU is classical) computers
> > > > > is worst-case memory bandwidth. Which is remarkably sucky,
> > > > > if you look it up.
> > > > >
> > > >
> > > > The problem I care about the most is computer vision. We are now
> > >
> > > Computer vision is very easy, actually, and quite well understood.
> > >
> > You are clearly stark raving MAD. There is no computer on earth that can
> > tell a cat from a dog reliably at this point.
> There is a significant difference between "I have no idea how to do that"
> and "ok, it's mere engineering at this point".
> We're understanding processing the retina sufficiently to produce
> code that the second processing pipeline can use. We have mapped
> features of later processing stages to the point that we know what
> you're looking at, or what you're dreaming of. We have off the
> shelf machine vision systems for many industrial tasks. We have
> autonomous cars that drive better than people.
I agree with all of this except the part of "we know what you're looking
at"... if that is state of the art, then I'm way behind.
> This is obviously one of these cases where we've made some slight
> progress over last few decades.
The progress that has been made in computer vision MOSTLY comes from Moore,
not from better algorithms. Granted, better computers enable less efficient
algorithms to be tried, so one could argue that some new algorithms have
emerged from Moore. There has been progress in computer vision, to be sure,
but it is slow and largely based upon computational improvements; algorithmic
improvement has occurred, but not at the same rate.
> > > The low number of layers and connectivity (fanout), all local at
> > > that, a retina needs are within the envelope of silicon fabrication.
> > >
> > The retina is not what I'm talking about. I'm discussing image
> But the retina is what I'm talking about, because it's structurally
> and functionally simple enough so that Moravec picked it for his analysis.
> > understanding. "That is a picture of a dog in front of a house. The house
> > has a victorian architecture. The 1957 Cadillac next to the house would
> > indicate that the picture was most likely taken between 1956 and 1976."
> This is beyond machine vision. This is halfway to human-equivalent AI.
But this is what I mean by computer vision. Think of it as computer
graphics backwards. In graphics, you take a description and make a scene.
In computer vision, you take a scene and produce a description.
> > > > approaching automated vehicles becoming a reality. I thought it would
> > > > happen in 2014 since 2004. It may be delayed a year or two by
> > > > and lawyers, but the technology should be cheap enough for luxury
> cars to
> > >
> > > I'm afraid luxury something is going to be a very, very small market
> > > in the coming decades.
> > >
> > Stop. This is just irritating and unhelpful.
> You can bet this is fucking irritating, because we did it to ourselves!
> A pathetic failure of planning.
> > > I agree that autonomous cars are mostly a very good thing, unless
> > > you happen to be a trucker, or a car maker.
> > I'm not sure how autonomous cars are bad for car makers. I do get why
> The duty cycle of a personal car is terrible. With autonomous cars you
> only need a small fraction of total fleet. You already see the beginning
> of this with carsharing smartphone apps. Now you no longer need the hassle
> of owning a car just that you can use it. It's just great, unless you're
> in the car making business. It's pretty awful if car making is your
> country's main moneymaking business, and they're really EV and autonomous
Ok. That makes sense. Thank you for explaining it to me. (See, I am capable
of learning, and also agreeing with you.)
> > are bad for truckers.
> > > Whatever Germany
> > > earns on car making is about enough to pay for the fossil fuel
> > > imports.
> > >
> > You're confusing me again.
> Obviously, Germany has to figure out some other way to pay for their
> fossil fuel imports in the near future.
Ah. You are referring to Germany as a car manufacturing nation. I thought
you were referring to Germany as the solar energy capital of the universe.
Thus my confusion.
> > > > have highway cruise control (including steering) by 2014 or 2015. So
> > > > venture into guessing the future was pretty close, using Ray's
> > > >
You never commented on whether we would have autonomous cruise control.
> > >
> > I totally agree that the 2d processes we are currently using are running
> > into limits. But we will keep making the stuff we're making now cheaper.
> Older processes are cheaper than bleeding edge, but that curve saturates
> almost immediately. The only way to drop the costs is by using a new node.
It isn't the ONLY way, but it is important for sure. If every CPU company
stopped producing better chips today it would take a long time for a
competitor to arise who could do it better. Prices would stop dropping
quickly. So in that sense you are correct.
That being said, I think technological progress will continue to be made,
and that older technologies will drop in price.
The hard drive market is a good example that there is a bottom to this. You
can't buy a 5 MByte hard drive anymore. Nobody is making them (unless there
is some microscopic hard drive I'm unaware of).
> > my mind, that keeps us on Moore until such time as a 3D solution is
> We are already off-Moore.
In the transistors per square cm sense, we probably are. Though it's hard
to find data to support even that.
Not in the dollars per computation realm, unless I'm missing something
basic. Honestly, I can't find much data on whether we're on or off Moore.
It's frustrating not to know.
> The question is how long it will take until
> a different technology can pick up scaling, at least for a brief while
> (if you're at atomic limits in the surface, you're only a few doublings
> away from where your only option is to start doubling the volume).
I don't have trouble with doubling the volume.
> > out that makes things faster.
> > The main problem in my mind isn't making stuff smaller, but in dissipating
> > heat so you can stack it up close to each other. That's what I mean by
> Stacking is off-Moore.
The only Moore I care about is $/computation. Any other Moore is irrelevant
to me, and to Ray's predictions as well.
> One of the scaling limits is that the power scaling
> is no longer with us. As you'll notice, no nonvolatile memory is ready to
> pick up the torch of SRAM/DRAM/NOR flash, despite many decades of research.
I posted a paper about this, and yes, you are correct that there is nothing
yet close to hard drives.
Too bad you stopped prior to addressing the point that optimism adds years to your life.