[ExI] Watson On Jeopardy

Eugen Leitl eugen at leitl.org
Wed Feb 23 21:14:25 UTC 2011


On Wed, Feb 23, 2011 at 11:27:59AM -0700, Kelly Anderson wrote:

> > Not so many more years.
> 
> I understand that Intel thinks that they can stay on track until 2018
> using their current approach. Now their current approach seems to

The 11 nm node is scheduled for 2015. I'm not at all sure 11 nm
will be a cakewalk, and beyond that things get really
interesting (the Si-Si bond length is 0.235 nm, and of course CMOS
stops working well before you get anywhere near that scale;
arguably past 11 nm you're in quantum electronics country,
aka molecular circuitry without the chemistry).

> mostly be to just put more cores on one chip, which requires

More cores with shared memory don't scale. The only way to go
much beyond that is something like SCC/"Tera-scale" -- message
passing over an on-die network instead of cache-coherent shared memory.
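
To make that concrete, a toy pthreads sketch (generic, nothing
SCC-specific): N threads bumping one shared counter keep bouncing its
cache line between cores, while padded per-thread counters scale
roughly linearly. That's coherence traffic in miniature, and it only
gets worse with core count.

/* contention.c -- toy illustration of why shared mutable memory stops
 * scaling: N threads hammer one counter (its cache line ping-pongs
 * between cores) vs. each thread using its own padded counter.
 * Build: gcc -O2 -pthread contention.c -o contention
 * Run:   ./contention 8                                             */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 50000000L

static long shared_counter;                 /* fought over by all cores */

struct padded { long v; char pad[64 - sizeof(long)]; }; /* own cache line */

static void *shared_worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        __atomic_fetch_add(&shared_counter, 1, __ATOMIC_RELAXED);
    return NULL;
}

static void *private_worker(void *arg)
{
    struct padded *c = arg;                 /* no sharing, no coherence traffic */
    for (long i = 0; i < ITERS; i++)
        c->v++;
    return NULL;
}

int main(int argc, char **argv)
{
    int n = argc > 1 ? atoi(argv[1]) : 4;
    if (n < 1) n = 1;
    pthread_t t[n];
    struct padded local[n];

    for (int i = 0; i < n; i++)             /* time this loop ...       */
        pthread_create(&t[i], NULL, shared_worker, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(t[i], NULL);
    printf("shared  total: %ld\n", shared_counter);

    for (int i = 0; i < n; i++) {           /* ... against this one     */
        local[i].v = 0;
        pthread_create(&t[i], NULL, private_worker, &local[i]);
    }
    for (int i = 0; i < n; i++)
        pthread_join(t[i], NULL);
    printf("private each : %ld\n", local[0].v);
    return 0;
}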

> intelligent compilers and/or programmers. Hopefully, more on the

Intelligent compilers don't work with shared-nothing
asynchronous message passing across kilonode systems. Humans are even
lousier at it, vide MPI debuggers (a minimal sketch follows below).

> compiler side as that leverages better.

Doesn't work either, though you can emulate a shared-memory
architecture on top of message passing. It will certainly force people
either to stagnate or to learn new tricks.
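
The sketch I promised above: a minimal MPI ring exchange, just to show
the flavor of shared-nothing asynchronous message passing. Every
buffer, tag, request and completion is the programmer's problem, which
is exactly the bookkeeping neither compilers nor most humans handle
well across kilonodes.

/* ring.c -- minimal shared-nothing, asynchronous message passing:
 * each rank sends its number to the right neighbor and receives
 * from the left one.  Nothing here is automatic.
 * Build: mpicc -O2 ring.c -o ring    Run: mpirun -np 4 ./ring      */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;
    int sendbuf = rank, recvbuf = -1;
    MPI_Request req[2];

    /* Post the receive before the send; a naive blocking version can
       deadlock once messages stop fitting in system buffers. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    printf("rank %d got %d from rank %d\n", rank, recvbuf, left);
    MPI_Finalize();
    return 0;
}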
 
> What about fabbing slices, and "gluing" them together afterwards? Is

Stacking with TSV is not as good as true 3D integration, and it doesn't
offer anything like Moore's law. You're basically just fabbing a lot of
Si real estate, then thinning it, then stacking it. I don't think
anything beyond 450 mm wafer size will happen, so without structure
shrink you're stuck paying real thalers for actual silicon real estate.

> there anything in that direction? (I am not a fab expert by any means,
> I'm just asking)

Yes, the next step will be TSV-stacked DIMMs, then memory stacks
on top of dies, then on top of individual cores, and then WSI
(wafer-scale integration) with stacked memory on top. After that you'll
have to get on the real 3D-integration train -- assuming you can.
 
> >> seen one solution to the heat problem that impressed the hell out of
> >
> > Cooling is only a part of the problem. There are many easy fixes which
> > are cumulative in regards to reducing heat dissipation.
> 
> The mechanism I saw was a water circulation mechanism driven off of
> the heat created in the chip itself. It was extremely cool (no pun

There are tricks to prevent power from being wasted in the first
place. Killing pin drivers, optical signalling, clockless designs, static
designs (MRAM/spintronics, memristors, and such), reversible logic.
Immersion cooling is definitely coming, and 60 deg C watercooling
is already happening.

> intended). Get the water out of the chip and you can cool the water
> using conventional means. Their pitch indicated that cooling was one
> of the biggest problems with going to 3D. There are probably many

The biggest problem with going to 3D is going to 3D. 

> more.
> 
> >> me, and no doubt there are more out there that I haven't seen. By the
> >> time they run out of gas on photo lithography, something, be it carbon
> >> nano tube based, or optical, or something else will come out. A
> >
> > Completely new technologies do not come out of the blue.
> > We're about to hit 11 nm http://en.wikipedia.org/wiki/11_nanometer
> > Still think Moore's got plenty of wind yet?
> 
> Not forever, but seemingly for a few more years.

I remember discrete-transistor minis and punched cards/tape. That's 35+ years?
We certainly won't get another decade of this, never mind several -- not
with semiconductor photolithography. The question is whether there will be
a smooth takeover. Given that clock doubling died about six years ago
without many people noticing, it may well be that we hit a
discontinuity when Moore's law in CMOS hits the wall, lasting until a
successor technology comes online. Any progress in that lacuna has to
come from architecture, which isn't out of the question, but it's tougher.
 
> What about race track memory? They were saying that might be available
> by 2015 the last time I saw anything on it.

I have no idea. I consider everything vaporware until it ships
(cf. bubble memory, MRAM).
Racetrack would be more of a flash killer, assuming it ever
happens.
 
> I am optimistic, and on these issues particularly I'm going off of the
> stuff Ray put in TSIN. If he's wrong, then I'm wrong. Knowing the cool
> stuff they see at MIT all the time, perhaps he is right.

I haven't read any Shortwhile in anger, but most of his predictions
are either trivial, cherry-picked, or outright bunk, like
http://www.guardian.co.uk/environment/2011/feb/21/ray-kurzweill-climate-change
 
> > I am talking that people don't bother with algorithms, because "hardware
> > will be fast enough" or just build layers of layers upon external
> > dependencies, because "storage is cheap, hurr durr".
> 
> A lot of business software is exactly that way because it doesn't have
> to be great. Just good enough. In things like computer vision and AI

You can say awful. It's ok. We're among friends.

> where the computational requirements are above the current level of
> hardware performance, care is still taken to optimize.

Would anyone who's tracking the field care to point to a few literature
reviews, preferably online and not behind paywalls?
 
> > I'm sorry, I'm in the trade. Not seeing this progress thing
> > you mention.
> 
> I do.

This glass is half empty, dammit!

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
