Warped
Researchers create ultra-fast '1,000 core' processor, Intel also toys with the idea
http://www.engadget.com/2010/12/28/researchers-create-ultra-fast-1-000-core-processor-intel-also/

We've already seen field programmable gate arrays (or FPGAs) used to create energy efficient supercomputers, but a team of researchers at the University of Glasgow led by Dr. Wim Vanderbauwhede now say that they have "effectively" created a 1,000 core processor based on the technology. To do that, the researchers divvied up the millions of transistors in the FPGA into 1,000 mini-circuits that are each able to process their own instructions -- which, while still a proof of concept, has already proven to be about twenty times faster than "modern computers" in some early tests. Interestingly, Intel has also been musing about the idea of a 1,000 core processor recently, with Timothy Mattson of the company's Microprocessor Technology Laboratory saying that such a processor is "feasible." He's referring to Intel's Single-chip Cloud Computer (or SCC), which currently packs a whopping 48 cores, but could "theoretically" scale up to 1,000 cores. He does note, however, that there are a number of other complicating factors that could limit the number of cores that are actually useful -- namely, Amdahl's law (see below) -- but he says that Intel is "looking very hard at a range of applications that may indeed require that many cores."
http://en.wikipedia.org/wiki/Amdahl%27s_law

Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining 19 hours (95%) of execution time can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the theoretical speedup is limited to at most 20×.
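For anyone wondering where that 20× ceiling comes from, here's a rough sketch of the formula in Python. This is just my own illustration of Amdahl's law using the 20-hour example above, not anything from the article:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n is the number of cores.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# The example above: 95% of the work parallelizes, 1 hour (5%) stays serial.
for n in (2, 48, 1000, 1_000_000):
    print(f"{n} cores -> {amdahl_speedup(0.95, n):.2f}x speedup")
# Even with a million cores the speedup only approaches 1 / 0.05 = 20x, never passes it.

So no matter how many cores you throw at it, the serial hour dominates in the end.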
So yeah, still awesome stuff regardless of the limitations. I bet one day even 1,000 cores will look like nothing.