,----[ Ramanraj K <ramanraj.k@gmail.com> ]
| Suppose I want my application to run 40 to 200 queries on a db, all
| concurrently, to take a simple decision, brain like parallel
| processing schemes/devices seems useful. The recent 64 bit
| processors seem to come with capacities for very large RAM, etc, and
| I am just curious if brain emulators are possible now.
| "Supercomputers" seem to be more focused on increasing clock speeds,
| but it may be worthwhile to also focus on increasing the number of
| threads/processors we could use to the range of million millions
| trading off with clock rate.
|
| I am not sure about technical feasibility etc., but this is one of
| the "unsolved" areas of cs, that may be of interest to researchers
| :)
|
| Ref: http://www.electricminds.org/ussclueless/essays/futurecs.htm
`----

Supercomputer manufacturers are *not* focused on increasing clock speed; instead, they try to maximize FLOPS (floating-point operations per second) for a given budget.
If you look at Top500.org, they consider the following factors to grade a supercomputer:

Rmax  - Maximal LINPACK performance achieved
Rpeak - Theoretical peak performance
Nmax  - Problem size for achieving Rmax
N1/2  - Problem size for achieving half of Rmax
To calculate Rpeak, use the following formula:

Rpeak ~= Number of Processors * Clock Speed * FLOPS/Cycle (or Number of FPUs)
Thunder example: 4096 processors x 1.4 GHz x 4 FLOPS/cycle ~= 23 TFLOPS (the CDC6440 has 4 FPUs).
Rmax/Rpeak ~= Efficiency.
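To make the arithmetic concrete, here is a minimal sketch of the Rpeak and efficiency formulas above, plugged in for Thunder. The processor count, clock, and FLOPS/cycle are the figures from the example; the Rmax value of ~19.9 TFLOPS is my recollection of Thunder's reported LINPACK number, so treat it as an assumption.

```python
def rpeak(n_processors, clock_hz, flops_per_cycle):
    """Theoretical peak performance in FLOPS:
    Rpeak ~= processors * clock * FLOPS/cycle."""
    return n_processors * clock_hz * flops_per_cycle

# Thunder: 4096 processors x 1.4 GHz x 4 FLOPS/cycle
thunder_rpeak = rpeak(4096, 1.4e9, 4)
print("Rpeak: %.1f TFLOPS" % (thunder_rpeak / 1e12))   # ~22.9 TFLOPS

# Efficiency = Rmax / Rpeak (Rmax assumed ~19.9 TFLOPS for Thunder)
thunder_rmax = 19.9e12
print("Efficiency: %.0f%%" % (100 * thunder_rmax / thunder_rpeak))
```

Since Rmax is the *measured* LINPACK result, it is always below Rpeak, which is why the efficiency ratio comes out under 100%.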
Top500.org's way of ranking supercomputers is not adequate: it fails to consider important factors like efficiency, price/performance ratio, manageability, and TCO. The actual design of a supercomputer depends very much on the application's needs. The best way to benchmark is to run real-world applications and measure performance. For example, applications that are embarrassingly parallel don't really need an expensive, high-performance, low-latency interconnect such as InfiniBand; GigE will do fine.
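A quick sketch of what "embarrassingly parallel" means in practice: every task is completely independent, so the workers never exchange data and interconnect latency is irrelevant. The `simulate` function here is a hypothetical stand-in for one independent unit of work (say, a single Monte Carlo run).

```python
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for an independent unit of work; a simple
    # linear-congruential scramble of the seed, no shared state.
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % (2 ** 31)
    return x

if __name__ == "__main__":
    # One worker per core by default; tasks are farmed out and the
    # results collected, with zero worker-to-worker communication.
    with Pool() as pool:
        results = pool.map(simulate, range(100))
    print(len(results))  # 100 independent results
```

Applications that *do* exchange data every timestep (e.g. tightly coupled physics codes) are the ones that justify paying for a low-latency interconnect.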
On the other side, if you are talking about processor clock speed: recently, processor manufacturers have realized it makes more sense to maximize the number of cores in a processor than its frequency. Dual-core processors are already out there in the market; quad- and octa-core parts are in development. Your argument is valid, and the industry is certainly moving in that direction.
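Multi-core machines suit exactly the kind of workload the original poster describes (40-200 concurrent queries feeding one decision). A minimal sketch, with `run_query` as a hypothetical stand-in for a real database call:

```python
from concurrent.futures import ThreadPoolExecutor

def run_query(query_id):
    # Hypothetical placeholder: a real application would execute
    # SQL against the database here and return the result.
    return query_id * 2

queries = range(40)  # the poster's lower bound of 40 queries

# Fan the queries out over a pool of threads; map preserves order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_query, queries))

decision = sum(results)  # combine the answers into one decision
print(decision)
```

Threads work well here because database queries are I/O-bound; for CPU-bound work per query, a process pool would make better use of multiple cores.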
Speaking of supercomputers replacing the human brain: we really don't know enough about the human brain. We don't even have a good, generally accepted definition of Artificial Intelligence. It's a long way to go for computers to achieve human-like intelligence (30-50 years, maybe).
But with the capabilities of today's computing hardware, we can certainly replace a lot of routine human tasks smartly, mostly in the field of automation. Software for AI is vastly behind, and I very much agree with you on this subject. Software engineers are more concerned with parsing XML or writing web applications than with solving the real problem of making machines intelligent, so that machines perform tasks themselves instead of us programming them every time.
The GNU project has entered the field of AI too. Look at:

http://directory.fsf.org/science/artintell/
http://www.tldp.org/HOWTO/AI-Alife-HOWTO.html