
Scaling To a Million Cores and Beyond

June 29th, 2010 06:13 admin

mattaw writes “In my blog post I describe a system designed to test a route to the potential future of computing. What do we do when we have computers with a million cores? What about a billion? How about 100 billion? None of our current programming models or computer-architecture models apply to machines of this complexity (or cope with their corresponding component failure rates and other scaling issues). The current model (coherent memory, a single global clock, everything can route to everywhere) just can’t scale to machines of this size. So scientists at the University of Manchester (including Steve Furber, one of the ARM founders) and the University of Southampton turned to the brain for a new model. Our brains just don’t work like any computer we currently build: they have far more than a million processing elements (more like 100 billion), none of which has a precise notion of time (a vague ordering of events, perhaps) or a shared memory, and not everything routes to everything else. Yet anyone who argues the brain isn’t a pretty spiffy processing system ends up looking silly. In effect, modern computing bears as much relation to biological computing as the ordered world of sudoku does to the statistical chaos of quantum mechanics.”
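The alternative model described above can be sketched in a few lines of Python. This is a toy illustration only (it does not reflect SpiNNaker’s actual software or hardware; all names here are made up): each “core” keeps purely local state that nothing else can read, connects to only a couple of neighbours rather than everything, and communicates solely by small spike messages delivered with jittered delays, so events have only a vague causal ordering rather than a shared clock.

```python
import heapq
import random

class Core:
    """A 'core' with only local state; no shared memory exists."""
    def __init__(self, core_id, threshold=2.0):
        self.core_id = core_id
        self.potential = 0.0     # local state: no other core can read this
        self.threshold = threshold
        self.neighbours = []     # sparse fan-out, not all-to-all routing

    def receive(self, weight):
        """Integrate an incoming spike; fire if the threshold is crossed."""
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True          # emit a spike to neighbours
        return False

def simulate(cores, initial_spikes, max_events=1000):
    """Event-driven loop: a priority queue stands in for message delivery.
    Delivery delays are jittered, so arrival order is only loosely tied to
    send order -- the 'vague ordering of events' mentioned in the article."""
    rng = random.Random(42)
    queue, seq = [], 0           # entries: (delivery_time, seq, target, weight)
    for t, target, w in initial_spikes:
        heapq.heappush(queue, (t, seq, target, w)); seq += 1

    delivered = 0
    while queue and delivered < max_events:
        t, _, target, w = heapq.heappop(queue)
        delivered += 1
        if cores[target].receive(w):
            for nbr, w_out in cores[target].neighbours:
                delay = 1.0 + rng.random()       # variable "wire" delay
                heapq.heappush(queue, (t + delay, seq, nbr, w_out)); seq += 1
    return delivered

# A small ring of 8 cores, each wired to its next two successors only.
cores = [Core(i) for i in range(8)]
for i, c in enumerate(cores):
    c.neighbours = [((i + 1) % 8, 1.5), ((i + 2) % 8, 0.8)]

# One strong spike into core 0 fires it; its two outgoing spikes arrive
# at cores 1 and 2 but are sub-threshold, so the activity dies out.
events = simulate(cores, initial_spikes=[(0.0, 0, 2.5)])
print("events delivered:", events)   # prints: events delivered: 3
```

The point of the sketch is what is *absent*: there is no global tick advancing every core in lockstep, no memory two cores both touch, and no route from every core to every other core, which is exactly the set of assumptions the article says cannot survive at a million cores and beyond.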

Source: Scaling To a Million Cores and Beyond

Related Articles:

  1. ARM Readies Cores For 64-Bit Computing
  2. Linux May Need a Rewrite Beyond 48 Cores
  3. Scaling, Scaling, Scaled: textPlus Turns Two, Hits 10 Billion Messages Sent Milestone
  4. Hidden Cores On Phenom CPUs Can Be Unlocked
  5. 4 Cores? 6 Cores? Do You Care?