Are We Ready For This?

Warped

[Image: Intel's 48-core Single-chip Cloud Computer (SCC)]


Researchers create ultra-fast '1,000 core' processor, Intel also toys with the idea

We've already seen field programmable gate arrays (or FPGAs) used to create energy efficient supercomputers, but a team of researchers at the University of Glasgow led by Dr. Wim Vanderbauwhede now say that they have "effectively" created a 1,000 core processor based on the technology. To do that, the researchers divvied up the millions of transistors in the FPGA into 1,000 mini-circuits that are each able to process their own instructions -- which, while still a proof of concept, has already proven to be about twenty times faster than "modern computers" in some early tests. Interestingly, Intel has also been musing about the idea of a 1,000 core processor recently, with Timothy Mattson of the company's Microprocessor Technology Laboratory saying that such a processor is "feasible." He's referring to Intel's Single-chip Cloud Computer (or SCC, pictured here), which currently packs a whopping 48 cores, but could "theoretically" scale up to 1,000 cores. He does note, however, that there are a number of other complicating factors that could limit the number of cores that are actually useful -- namely, Amdahl's law (see below) -- but he says that Intel is "looking very hard at a range of applications that may indeed require that many cores."
http://www.engadget.com/2010/12/28/researchers-create-ultra-fast-1-000-core-processor-intel-also/

Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular 1-hour portion cannot be parallelized, while the remaining 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the theoretical speedup is limited to at most 20×.
http://en.wikipedia.org/wiki/Amdahl%27s_law
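
To put numbers on that example, here is a minimal sketch of the formula behind the law, S(N) = 1 / ((1 - p) + p/N), where p is the parallelizable fraction and N is the number of cores (the Python below is just for illustration):

def amdahl_speedup(p, n):
    # Theoretical speedup for a parallel fraction p on n cores.
    return 1.0 / ((1.0 - p) + p / n)

# The 20-hour example from the quote: 1 hour serial, 19 hours (95%) parallel.
p = 0.95
for n in (2, 48, 1000, 1000000):
    print(n, "cores ->", round(amdahl_speedup(p, n), 2), "x")

# Even with a million cores the speedup never exceeds 1 / (1 - 0.95) = 20x.

Plugging in 1,000 cores gives roughly 19.6×, which is why the serial 5% dominates no matter how many cores you add.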

So yeah, still awesome stuff regardless of the limitations. I bet one day even 1,000 cores will look like nothing.
 
So is a core another processor on the same slab of silicon? Does this have the same number of transistors as an equivalent single-core chip?
 
Who needs winter home heating when you have one of these?
 
I know hardly anything about computers, but wouldn't an asymmetric processor provide an easy workaround for Amdahl's law? Say 1/4 of the processor (250 cores) is replaced with a single much more powerful core, one that would still have much less total processing power than the 250 cores when running parallel code, but would be much more effective for a single thread.
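
As a rough sketch of that idea (the core counts and the 5x single-thread factor below are made-up assumptions for illustration, not figures from the article):

def symmetric_speedup(p, n_small):
    # Plain Amdahl: the serial part runs on one of the small cores.
    return 1.0 / ((1.0 - p) + p / n_small)

def asymmetric_speedup(p, n_small, big_core_factor):
    # The serial part runs on one big core that is big_core_factor times
    # faster than a small core; the parallel part runs on the small cores.
    return 1.0 / ((1.0 - p) / big_core_factor + p / n_small)

p = 0.95  # parallel fraction, as in the 20-hour example above
print(symmetric_speedup(p, 1000))       # ~19.6x with 1,000 small cores
print(asymmetric_speedup(p, 750, 5.0))  # ~89x with 750 small cores + 1 big core

Speeding up the serial fraction attacks Amdahl's law directly, so trading some small cores for one strong core can pay off whenever the serial portion matters.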
 
I thought Metro2033 was the benchmark game now.
 
You will be limited by the number of gates on the FPGA. Currently I think the max gate count in a production FPGA is around 4 million. I don't know how many gates you need to emulate a processor, but I'm sure it's a crap load. So if you had 1,000 processors on a single 4M-gate FPGA, they would be very slow processors. They could probably only handle a few instructions at a time.
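
For a rough sense of that limit, here is a toy gate-budget calculation (the gates-per-core figures are guesses for illustration only, not data for any real soft core):

total_gates = 4_000_000  # assumed FPGA capacity from the post above

# Hypothetical soft-core sizes, purely illustrative.
for core_name, gates_per_core in [
    ("tiny processing element", 4_000),
    ("small 32-bit soft core", 30_000),
    ("fuller-featured soft core", 100_000),
]:
    cores_that_fit = total_gates // gates_per_core
    print(core_name, "->", cores_that_fit, "cores fit")

# To squeeze 1,000 cores into 4M gates, each one gets only ~4,000 gates,
# so it has to be a very simple processing element with a tiny instruction set.

That lines up with the article: the Glasgow design is 1,000 simple mini-circuits each handling its own instructions, not 1,000 full-blown CPU cores.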
 