[ExI] ibm takes on the commies
Samantha Atkins
sjatkins at mac.com
Wed Feb 16 23:53:45 UTC 2011
On 02/15/2011 11:52 PM, Eugen Leitl wrote:
> On Tue, Feb 15, 2011 at 11:14:11PM -0800, spike wrote:
>>
>>
>> Computer hipsters explain this to me. When they are claiming 10 petaflops,
>> they mean using a few tens of thousands of parallel processors, ja? We
> A common gamer's graphics card can easily have a thousand or a couple
> thousand cores (mostly VLIW) and memory bandwidth from hell. Total node
> count could run into tens to hundreds of thousands, so we're talking
> multiple megacores.
As you are probably aware, those are not general-purpose cores. They
cannot run arbitrary algorithms efficiently.
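To make the distinction concrete, here is a rough sketch in plain C
(illustrative only, not from any real codebase). The first function is
data-parallel: every iteration does the same arithmetic on independent
elements, which is exactly what a wide VLIW/SIMD graphics core is built
for. The second is pointer chasing: each load depends on the value of
the previous one, so a thousand GPU lanes buy you nothing over a single
CPU core.

  #include <stddef.h>

  void saxpy(size_t n, float a, const float *x, float *y)
  {
      /* Independent iterations: maps cleanly onto thousands of lanes. */
      for (size_t i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }

  size_t chase(const size_t *next, size_t start, size_t steps)
  {
      /* Serial dependence chain: the address of each load is the value
       * of the previous load, so the work cannot be spread across lanes. */
      size_t i = start;
      while (steps-- > 0)
          i = next[i];
      return i;
  }
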
>> couldn't check one Mersenne prime per second with it or anything, ja? It
>> would be the equivalent of 10 petaflops assuming we have a process that is
>> compatible with massive parallelism? The article doesn't say how many
> Fortunately, every physical process (including cognition) is compatible
> with massive parallelism. Just parcel the problem over a 3d lattice/torus,
> exchange information where adjacent volumes interface through the high-speed
> interconnect.
There is no general parallelization strategy. If there were, making
maximal use of multiple cores would already be a solved problem. It is
anything but.
> Anyone who has written numerics for MPI recognizes the basic design
> pattern.
>
Not every problem is reducible to a form in which those techniques
suffice.
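
For anyone who hasn't written such numerics, the design pattern Eugen is
pointing at looks roughly like the sketch below (a minimal 1-D halo
exchange; a real 3-D lattice/torus code would pass three dimensions to
MPI_Cart_create and overlap communication with computation). It works
well exactly when the data dependencies are local to adjacent volumes;
problems with irregular or global dependencies don't decompose this
cleanly, which is the point above.

  /* Minimal 1-D halo exchange sketch. The constant N_LOCAL is
   * illustrative, not from any particular code.
   * Build and run with, e.g.: mpicc halo.c -o halo && mpirun -np 8 ./halo */
  #include <mpi.h>
  #include <stdio.h>

  #define N_LOCAL 1024   /* interior cells owned by each rank */

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Periodic 1-D torus of ranks. */
      int dims[1] = { size }, periods[1] = { 1 };
      MPI_Comm torus;
      MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &torus);

      int left, right;
      MPI_Cart_shift(torus, 0, 1, &left, &right);

      /* Local slab plus one ghost ("halo") cell on each side:
       * u[0] is the left halo, u[N_LOCAL+1] the right halo. */
      double u[N_LOCAL + 2], v[N_LOCAL + 2];
      for (int i = 1; i <= N_LOCAL; i++)
          u[i] = rank;            /* dummy initial data */

      /* Exchange boundary values with neighbours: send my rightmost
       * interior cell to the right neighbour, receive its rightmost
       * cell into my left halo, and vice versa. */
      MPI_Sendrecv(&u[N_LOCAL], 1, MPI_DOUBLE, right, 0,
                   &u[0],       1, MPI_DOUBLE, left,  0,
                   torus, MPI_STATUS_IGNORE);
      MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  1,
                   &u[N_LOCAL + 1], 1, MPI_DOUBLE, right, 1,
                   torus, MPI_STATUS_IGNORE);

      /* One purely local stencil update; only the halo cells
       * required any communication. */
      for (int i = 1; i <= N_LOCAL; i++)
          v[i] = 0.5 * (u[i - 1] + u[i + 1]);

      if (rank == 0)
          printf("halo exchange done on %d ranks, v[1] = %f\n", size, v[1]);

      MPI_Comm_free(&torus);
      MPI_Finalize();
      return 0;
  }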
- s