On 8/5/05, Anand Babu ab@gnu.org.in wrote:
,----[ Ramanraj K ramanraj.k@gmail.com ]
| Suppose I want my application to run 40 to 200 queries on a db, all
| concurrently, to take a simple decision, brain like parallel
| processing schemes/devices seem useful. The recent 64 bit
| processors seem to come with capacities for very large RAM, etc, and
| I am just curious if brain emulators are possible now.
| "Supercomputers" seem to be more focused on increasing clock speeds,
| but it may be worthwhile to also focus on increasing the number of
| threads/processors we could use to the range of million millions
| trading off with clock rate.
|
| I am not sure about technical feasibility etc., but this is one of
| the "unsolved" areas of cs, that may be of interest to researchers
| :)
|
| Ref: http://www.electricminds.org/ussclueless/essays/futurecs.htm
`----

Supercomputer manufacturers are *not* focused on increasing clock speed; instead they try to maximize FLOPS (floating-point operations per second) for a given budget.
If you look at Top500.org, they consider the following factors to grade a supercomputer:

  Rmax  - Maximal LINPACK performance achieved
  Rpeak - Theoretical peak performance
  Nmax  - Problem size for achieving Rmax
  N1/2  - Problem size for achieving half of Rmax
To calculate Rpeak use the following formula:

  Rpeak ~= Number of Processors * Clock Speed * FLOPS/Cycle (or Number of FPUs)

Thunder example: 4096 x 1.4 GHz x 4 ~= 23 TFlops (the CDC6440 has 4 FPUs).
Rmax/Rpeak ~= Efficiency.
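To make the arithmetic concrete, here is a minimal Python sketch of the Rpeak formula and the efficiency ratio using the Thunder numbers above. The Rmax value in it is only an assumed sample figure for illustration; check Top500.org for the real one.

    # Minimal sketch of the Rpeak formula quoted above.
    def rpeak(processors, clock_hz, flops_per_cycle):
        """Theoretical peak: Number of Processors * Clock Speed * FLOPS/Cycle."""
        return processors * clock_hz * flops_per_cycle

    thunder_rpeak = rpeak(4096, 1.4e9, 4)   # ~22.9e12, i.e. ~23 TFlops
    thunder_rmax = 19.9e12                  # assumed LINPACK result, for illustration only

    print("Rpeak      : %.1f TFlops" % (thunder_rpeak / 1e12))
    print("Efficiency : %.0f%%" % (100 * thunder_rmax / thunder_rpeak))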
Top500.org's way of ranking supercomputers is not adequate. They fail to consider some important factors like efficiency, price/performance ratio, manageability and TCO. The actual design of a supercomputer depends very much on the application's needs. The best way to benchmark is to run real-world applications and measure performance. For example, applications that are embarrassingly parallel don't really need an expensive high-performance, low-latency interconnect such as InfiniBand; GigE will do fine.
I am surprised that the Top500 ranking is solely about the ability of machines to do floating-point calculations, based on the LINPACK benchmark available at http://www.netlib.org/linpack/. More fundamentally, why is it that we still have not moved to fixed-point representations, in spite of great improvements in technology? It is a little unsettling for me, as I need to use rational numbers, and I need fixed-point representations to be assured that the scheme works.
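Just to illustrate why binary floating point is unsettling for such schemes, here is a quick Python check; the fractions module is used here only as one convenient stand-in for exact arithmetic.

    from fractions import Fraction

    # 0.1 has no finite binary expansion, so IEEE 754 doubles drift:
    print(0.1 + 0.2 == 0.3)                                       # False
    # An exact rational type keeps the comparison reliable:
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True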
Dynamism in Law is very well known, and an accepted fact of life. Change is the law of nature, and everything, except of course the law of change itself, is subject to change. Law keeps pace with new developments, and therefore the Law is in a constant state of flux, and legal rules have to be stored and executed accordingly.
Gaming rules and legal rules have many similarities from a computing point of view, but there is a very important and significant difference: the rules of chess do not change, at least while a game is in progress, but life with law is dynamic. Imagine you have to write a program in such a way that the rules of chess could change while a game is in progress. The Game of Life is governed by the Law, and the Law of Dynamism requires us to devise schemes that can handle such change.
Assuming that a set of rules could run linearly, we could number them sequentially and code as follows:

  1. Do this
  2. Do that
  3. Do foo
Now, if a new rule has to be introduced between 1 and 2, the easiest way to do it is to simply insert it between 1 and 2 and assign the new rule a rational number with a simple division: (1 + 2) / 2 = 1.5. This may be done ad infinitum, and we can elegantly handle the dynamic nature of rules. I can sort the rules as they are sequentially stored, and have good control over them. In this scheme, it makes no sense to store 1.5 with a floating-point representation; it needs a fixed-point representation. This is necessary to avoid duplicate rows, duplicate rules, ambiguity, etc.
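A minimal sketch of this numbering scheme in Python, using the standard fractions module as a stand-in for exact fixed-point storage; the rule texts and helper name are made up for illustration.

    from fractions import Fraction

    rules = {Fraction(1): "Do this",
             Fraction(2): "Do that",
             Fraction(3): "Do foo"}

    def insert_between(lo, hi, text):
        """Assign the new rule the exact midpoint of its two neighbours."""
        key = (lo + hi) / 2          # Fraction arithmetic: stays exactly 3/2, 5/4, ...
        rules[key] = text
        return key

    insert_between(Fraction(1), Fraction(2), "New rule")        # key 3/2
    insert_between(Fraction(1), Fraction(3, 2), "Newer rule")   # key 5/4

    for key in sorted(rules):        # exact keys sort without duplicates or ambiguity
        print(key, "-", rules[key])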
Apart from the need above, it is fairly important to have good representations for numbers on computers if we desire to make any serious progress. Since processors themselves use FPUs, computer applications seem to reflect the same. Probably there were design limits 20 years ago, but I wonder if they still exist. We should have Rational Number Units that deal with fixed-point representations, and return inf if a system limit is reached.
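As a thought experiment, such a Rational Number Unit could be sketched in software like this; the 64-bit limit and the function name are assumptions for illustration, not a real hardware spec.

    from fractions import Fraction

    LIMIT = 2**63 - 1   # assumed word size of the hypothetical unit

    def rnu_add(a, b):
        """Add two rationals; overflow to inf instead of silently losing precision."""
        result = a + b
        if abs(result.numerator) > LIMIT or result.denominator > LIMIT:
            return float("inf")
        return result

    print(rnu_add(Fraction(1, 3), Fraction(1, 6)))           # 1/2, exact
    print(rnu_add(Fraction(1, 2**40), Fraction(1, 3**40)))   # denominator too big -> inf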
On the other side, if you are talking about processor clock speed: recently processor manufacturers have realized it makes more sense to maximize the number of cores in a processor than the frequency. Dual-core processors are already out there in the market; quad-core and octa-core are in development. Your argument is valid, and the industry is certainly moving in that direction.
Speaking about supercomputers replacing the human brain, we really don't know enough about the human brain.
The Daksinamurti Stotra begins plainly with a simple truth: ".. the world is within oneself even as a city reflected in a mirror is, but projected as if it is outside, by maya, as in dream". I did not understand this clearly at once, but when I later saw the truth there, I was shocked. "The Matrix" has capitalized on this :) An intro to the movie is available at http://awesomehouse.com/matrix/intro.html
Our ancients, having realised several truths, analysed them well and proposed several theories, testing and proving them through practice. All that wealth is very much available, and as the physiology of the brain becomes more transparent to scientists, the theories would get confirmed.
We don't even have a good, acceptable definition of Artificial Intelligence. It's a long way to go for computers to achieve human-like intelligence (30-50 years maybe).
"Arivu" in Tamil stems from the root word "Ari" meaning "to know". Knowledge is through the sense organs, and they are the seats of intelligence. Nanool considers even trees to be endowed with intelligence based on its sense of touch Higher forms have more sensory organs, but all of them are based on "touch". Our brain is loosely coordinating input from and issuing output commands with help from a sixtth sense that we call by various names. Basic goals for us include assuring ourselves of good food, clothing, shelter etc. AI for computer would essentially mean what goals *we* set for *it*. AFAIK, the legal rules are the best source of what is widely accepted as "common sense", atleast it provides a very strong basic foundation for building better systems. What I currently see is only attempts to build AI systems without the basic foundation, and therefore, they seem to fall to the ground or remain only as floating dreams.
The GNU project has entered the field of AI too. http://directory.fsf.org/science/artintell/ http://www.tldp.org/HOWTO/AI-Alife-HOWTO.html
That is not correct. The GNU Project, along with the *GPL*, is Artificial Life, with a full-fledged AI system, evolving and growing strongly and steadily :)