[ExI] The Actual Visionary of the Future
eugen at leitl.org
Tue Oct 29 21:02:18 UTC 2013
On Tue, Oct 29, 2013 at 01:50:53PM -0600, Kelly Anderson wrote:
> > If it's not a theory, then why are we wasting time on
> > such assclownage?
> Because even as a rule of thumb it is better than going completely in the
> dark. If you were in a cave, would you not want a small candle as opposed
> to nothing at all? I don't know if $1000 worth of computation will
> approximate the power of the human brain in 2025, 2029, 2035 or 2040 but
> the center of my guess is 2029 based on Moore's Law. That's slightly better
There are two problems with this statement. First, brains don't
run LINPACK, so you don't know, you can only guess. Second, you
assume that Moore's law still continues until 2040, while we have
data showing this isn't true even in 2013.
> than saying, "I have no idea", isn't it?
Where it matters I prefer to be bounded by pessima rather than optima.
Because optimism kills, that's why.
> > You have never heard of banks going broke, assets seized, currency
> > hyperinflated? Really?
> Of course I have. Those things can happen because of bad management, war,
> bad government control over money and the like.
Systemic crises happen when your resource-tracking model has a
cumulative bias and disconnects from the underlying resources. There
is a very good reason why usury has always had a bad rap.
> > Are you honestly believing that money likes to work, and it
> > keeps growing in the bank vaults, like early miners thought
> > metal grew in the mountain, so that they left there some so
> > that it could breed?
> Yes, I believe money properly invested (say in the stock market) does like
> to work. It does create value. I know this because I've personally
Most of the stock market is very much like Monte Carlo.
> witnessed 8 million dollars grow into 32 million dollars over a period of
> ten years. Without investment, that could never have happened.
Fly, you fools!
> > > I'm baffled by your use of the word "stuck" here. We just got to 4 Tbytes
> > You're getting far too frequently baffled for my liking. I'm showing
> > you instances where reality deviates from the nice linear semi-log.
> > There have been multiple smooth technology handovers in the platter
> > areal density which however show a different scaling
> > http://www.hindawi.com/journals/at/2013/521086/fig1/
> I think this goes more to proving my point than yours.
None of that curve reminds you of anything?
> > If you think you're seeing a linear semilog plot in there, then
> > throw away your ruler. Or buy new glasses. If you're now agreeing
> > that the growth is saturating, then why are you wasting my time?
> The biggest downcurve in the plot is the part that projects into the
> future. Let's come back in 5 years and see what reality actually happens.
Yes, let's come back in 5 years and see how these 2-year doublings
are faring, and what Moore's doubling time is by then.
> > > not that long ago. We always get "stuck" by this definition. I have
> > > attached my spreadsheet of hard drive prices that I have been maintaining
> > The metric you're looking for is areal density.
> The metric I care about is inflation adjusted dollars/byte. I could care
> less about areal density, except that it is one (and only one) mechanism by
You don't see the Thailand floods in areal density; you do see them
in price. There are no affordable 4-platter drives, so areal density
is really the only useful metric.
> which dollars/byte goes down.
> I agree that brilliant people will find it EXTREMELY difficult to build
> features smaller than atoms, perhaps even impossible. But we aren't close
> enough to the problem to say that we KNOW it is impossible yet. If it is
It's useful to know that the Si-Si bond length is 0.235 nm. Of course
you can't make widgets from just Si, so let's put the critical size down
to 1 nm. Intel is currently at 14 nm, and already has issues with
yield. So we're definitely close enough.
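The arithmetic behind "close enough" is easy to check. A minimal sketch,
assuming the 14 nm current node and the ~1 nm practical floor stated above:

```python
# How many more feature-size halvings fit between the current node and
# an assumed ~1 nm practical floor? (The Si-Si bond is 0.235 nm, but
# real structures need many atoms, hence the 1 nm floor.)
node_nm = 14.0   # current node, per the text above
floor_nm = 1.0   # assumed practical minimum feature size
halvings = 0
while node_nm >= floor_nm:
    node_nm /= 2
    halvings += 1
print(halvings)  # prints 4: only four halvings left before the floor
```

Four halvings of linear feature size is roughly eight area doublings, which
at multi-year doubling times is not a long runway.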
> possible, it might involve something like a highly controlled neutron star.
> And IF this is the case, that would explain the Fermi paradox. This is pure
> conjecture. There are a LOT of things we can do before we need subatomic
> manipulation. We aren't even close.
> Things like refineries do take a long time to build using today's
> techniques. I can see a day coming though where building such things will
I can also see a day coming, but that day needed to be yesteryear,
because Ray's exponential photovoltaics don't produce liquid fuels,
nor glass, nor aluminium, nor electricians, nor grid upgrades.
> not take as long as they do today. Humans can only move so fast, but robots
> can move faster.
> I refer you to:
> If you could imagine robots programmed by sophisticated scheduling systems
> to build a refinery, I propose that you could, in principle, build a
> refinery rather quickly. The slowest part potentially is getting government
> Here is my biggest point. If you can imagine being able to do something
> given a reasonable amount of time and money with current technology, then I
> can't imagine that given sufficient incentive that such a thing would not
> be accomplished. There is sufficient incentive to improve computational
> efficiencies, therefore, I cannot see such things not being accomplished.
I can see such things not being accomplished. We suffer a death of a
thousand papercuts, and then we use up whatever plutonium is around to
make a few strong points in trinitite.
> > We've already fallen from the semiconductor litho curve.
> Let's talk about CPS/Second/inflation adjusted Dollar. I am intensely NOT
What is CPS? Characters Per Second?
> interested in the details of how it happens. That is someone/everyone
> else's job at the moment.
> > See the NOR flash scaling at the URL I posted earlier.
> > We've already fallen of the PV deployment curve (and we
> > were never on the according infrastructure curve in the
> > first place).
> > You can't jump from zero TW to 20 TW in 20 years.
> > Not unless you have MNT, and collectively we made sure we
> > failed to develop that.
> When you say MNT, do you mean molecular nanotechnology?
> > Which part of "no more constant doubling times for you" you don't
> > understand?
> As long as it continues to double, I don't care if it takes 18 months or 24
What if your doubling times double, too? Remember, we're already at 36 months,
not 18. How do 6, 12, 24, 48 years sound?
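To make that scenario concrete, a sketch assuming today's doubling time is
3 years (the 36-month figure above) and that each successive doubling takes
twice as long as the last:

```python
# If each performance doubling takes twice as long as the previous one,
# calendar time explodes: 3 + 6 + 12 + 24 = 45 years for just four
# more doublings.
doubling_years = 3.0   # assumed current doubling time (36 months)
elapsed = 0.0
for k in range(1, 5):
    elapsed += doubling_years
    print(f"doubling {k}: +{doubling_years:.0f} y, {elapsed:.0f} y total")
    doubling_years *= 2
```

Under constant 18-month doublings those four doublings would take 6 years;
here they take 45.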
> or 36. You can't point to a date in the future and say "improvement stops
I can show you an asymptote, and no futurist likes asymptotes.
> here" can you? As long as it is doubling somewhat close to current levels,
> the things I care about will continue to happen in the time scales I care
> I do care if the whole damn boat is going down. But that is a separate
> conversation. So long as there are SOME rich people/corporations/AGIs
> paying for the development of this stuff, I think it will continue to be
The reason Moore is off-track is that there's not enough money in
the world that people are willing to throw at the problem. With each
further step the difficulty rises, and so does the money required.
> > > a slightly different time scale. And there is no guarantee that we won't
> > Which part of "you can't make widgets smaller than single atoms" you don't
> > understand?
> What part of "There's plenty of room at the bottom" do you not understand.
> We're not close to the atomic limits on most things.
We're within touching distance of atomic/quantum limits in CMOS semilitho.
I don't care about anything else if we're talking Moore.
> > > Sorry, you've lost me here. I don't know what these things are.
> > It's a cheap mainframe, 75 kUSD entry level. Obviously, CPUs
> > build from such can be made from unobtainium. But flash drives
> > and mobile CPUs have low margins, so there's diminished incentive
> > to go to the next node (especially if the next node has lower
> > performance than current one).
> Thank you for explaining that. I don't pay much attention to mainframes, or
Mainframes are money-makers. You can build things there you can't
elsewhere, but there's a price to pay for it.
> even highly parallel computers or supercomputers in my work. I'm mostly
Supercomputers are highly parallel computers.
> interested in PCs, tablets and cell phones, and perhaps Google glass.
Supercomputers are pretty much like tablets and cell phones. PCs
> Consumer related stuff. Yes, the cloud changes that, but I don't play in
> that world much.
Cloud is a bit like cellphones, but without the floats. Same core
issue, though: power.
> Given that, I'm still not understanding your point. Are these mainframes
> getting more expensive as time goes on? Are we not on some kind of curve
They stay roughly the same.
> with respect to them? My understanding is that rack based computing is
> chugging along at an acceptable rate of growth. Am I missing something?
x86 doesn't know it's dead yet. Developers don't yet realize hardware
is going places they don't understand (some haven't even figured out
that clock rates stopped doubling).
> > Anyone looking at an arbitrary time frame knows that darwinian
> > evolution still applies.
> But we are in the era of memetic evolution, not darwinian. And memes
> replicate faster, and have a higher mutation rate.
Darwin never missed a beat. The fitness function changes, but we're
still imperfect replicators in a limited-resource context. The future
is exactly like that, only far more so. Darwin stuck on fast-forward
is not a happy fun place. Do not taunt the happy-fun Darwin ball.
> > My thesis is that a postecosystem has a food web.
> Ok, so here we may have a difference of opinion. I concede that the future
> ecosystem will have an energy web, but not necessarily food based.
Locally, atoms and Joules are limited, and there is still competition
and replication. Which is the main reason why nobody can ignore the
> > We're understanding processing the retina sufficiently to produce
> > code that the second processing pipeline can use. We have mapped
> > features of later processing stages to the point that we know what
> > you're looking at, or what you're dreaming of. We have off the
> > shelf machine vision systems for many industrial tasks. We have
> > autonomous cars that drive better than people.
> I agree with all of this except the part of "we know what you're looking
> at"... if that is state of the art, then I'm way behind.
I'm talking instrumented behaving subjects. Neuroscience is making
remarkable advances, driven by instrumentation and computers.
> > This is obviously one of these cases where we've made some slight
> > progress over last few decades.
> The progress that has been made in computer vision MOSTLY comes from Moore,
I agree, we're hardware-limited. Which is why the early failure of Moore is so dismal.
> not from better algorithms. Now, better computers enable algorithms that
> are less efficient to be tried, and therefore one could argue that some new
> algorithms have emerged from Moore. There has been progress in computer
> vision to be sure. The progress is slow. It is largely based upon
> computational improvements. Algorithmic improvement has occurred, but not
> at the same rate.
> > Obviously, Germany has to figure out some other way to pay for their
> > fossil fuel imports in the near future.
> Ah. You are referring to Germany as a car manufacturing nation. I thought
> you were referring to Germany as the solar energy capital of the universe.
Germany is a good demonstration of why solar is anti-Moore. It's very
easy to double a very small base, but just as with the grains of rice
on the chessboard, growth suddenly turns sigmoid once you're in serious
cash-flow territory.
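The sigmoid point can be illustrated numerically. A sketch with made-up
numbers (an arbitrary ceiling of 100 units, a per-period doubling rate),
not actual PV deployment data:

```python
import math

# Exponential vs. logistic growth from the same small base: they are
# indistinguishable early on, but the logistic saturates as deployment
# approaches its ceiling. All numbers here are illustrative.
ceiling = 100.0      # assumed saturation level (arbitrary units)
x0 = 0.1             # small initial base
r = math.log(2)      # one doubling per time unit

for t in (0, 5, 10, 15):
    exp_x = x0 * math.exp(r * t)
    log_x = ceiling / (1 + (ceiling / x0 - 1) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exp_x:9.1f}  logistic={log_x:6.1f}")
```

At t=5 the two curves are nearly identical; by t=15 the exponential has
blown past the ceiling by a factor of 30 while the logistic has flattened
just under it. Doubling very little is easy; doubling a serious fraction
of the ceiling is where the curve bends.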
> Thus my confusion.
> You never commented on whether we would have autonomous cruise control.
We've had autonomous cruise control for a while; the interesting part
is when it becomes cheap enough for conventional cars, and when the
insurance issue is addressed. That could be rather soon, but I'm not
going to make a prediction, because I expect significant disruption
ahead, scrambling pretty much every growth prognosis.
> > We are already off-Moore.
> In the transistors per square cm sense, we probably are. Though it's hard
> to find data to support even that.
Just as with peak oil, such data only becomes visible a bit after the
fact. Some have called it as early as 2011; we'll see soon enough.
> Not in the dollars per computation realm, unless I'm missing something
> basic. Honestly, I can't find much data on whether we're on or off Moore.
> It's frustrating not to know.
It is, most search engines are getting increasingly useless for
> > The question is how long it will take until
> > a different technology can pick up scaling, at least for a brief while
> > (if you're at atomic limits in the surface, you're only a few doublings
> > away from where your only option is to start doubling the volume).
> I don't have trouble with doubling the volume.
Remember the grains of rice on a chessboard thing. Somebody has to pay
for these and move these. In the beginning, it is very easy.
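For reference, the chessboard arithmetic being invoked:

```python
# One grain on the first square, doubling on each of the 64 squares:
# the familiar arithmetic behind "in the beginning, it is very easy".
last_square = 2 ** 63          # grains on square 64
total = 2 ** 64 - 1            # grains on the whole board
print(last_square, total)
# The final square alone holds more than the other 63 combined,
# which is exactly where paying for and moving the grains gets hard.
```

Each late doubling costs more than everything that came before it put
together, so the resources backing the curve must themselves grow
exponentially.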