Man: clock rate of about 2 kHz; a million million (10^12) processors (neurons); can solve sophisticated problems in less than 400 cycles (under ideal conditions ;)); can do complex imaging, audio processing, etc. with the available sensory intelligence.
Machine: clock rates in the gigahertz range, with supercomputers reaching teraflops of throughput; few processors; can have very wide sensory input but cannot do comparably complex processing; and "problem solving" capabilities are not really worth mentioning.
Suppose I want my application to run 40 to 200 queries on a db, all concurrently, to take a simple decision: brain-like parallel processing schemes/devices seem useful. The recent 64-bit processors seem to come with capacities for very large RAM, etc., and I am just curious whether brain emulators are possible now. "Supercomputers" seem to be more focused on increasing clock speeds, but it may be worthwhile to also focus on increasing the number of threads/processors we could use, to the range of a million million, trading off against clock rate.
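Just to make that concrete, a minimal sketch of the idea in Python (run_query and the SQL text here are hypothetical placeholders, not a real schema):

    # Hypothetical sketch: fire 40 concurrent queries and fold the
    # answers into one simple decision.
    from concurrent.futures import ThreadPoolExecutor

    def run_query(sql):
        # Placeholder for "execute sql against the db and return a
        # boolean fact from the result set".
        return len(sql) % 2 == 0  # stand-in so the sketch actually runs

    queries = ["SELECT 1 FROM facts WHERE rule_id = %d" % i for i in range(40)]

    with ThreadPoolExecutor(max_workers=40) as pool:
        answers = list(pool.map(run_query, queries))

    decision = all(answers)  # the "simple decision"
    print("decision:", decision)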
I am not sure about technical feasibility, etc., but this is one of the "unsolved" areas of CS that may be of interest to researchers :)
Ref: http://www.electricminds.org/ussclueless/essays/futurecs.htm
,----[ Ramanraj K ramanraj.k@gmail.com ]
| Suppose I want my application to run 40 to 200 queries on a db, all
| concurrently, to take a simple decision: brain-like parallel
| processing schemes/devices seem useful. [...] it may be worthwhile
| to also focus on increasing the number of threads/processors we
| could use, to the range of a million million, trading off against
| clock rate.
`----
Supercomputer manufacturers are *not* focused on increasing clock speed; instead, they try to maximize FLOPS (floating-point operations per second) for a given budget.
If you look at Top500.org, they consider the following factors to grade a supercomputer:

Rmax   Maximal LINPACK performance achieved
Rpeak  Theoretical peak performance
Nmax   Problem size for achieving Rmax
N1/2   Problem size for achieving half of Rmax
To calculate Rpeak, use the following formula: Rpeak ~= Number of Processors x Clock Speed x FLOPS/Cycle (i.e., the number of FPUs).
Thunder example: 4096 processors x 1.4 GHz x 4 FLOPS/cycle ~= 23 TFLOPS (the CDC6440 has 4 FPUs).
Rmax/Rpeak ~= Efficiency.
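A quick sanity check of the arithmetic, as a Python sketch (the Rmax figure is Thunder's published LINPACK number quoted from memory, so treat it as illustrative):

    # Sanity-checking the Rpeak formula with the Thunder figures above.
    processors = 4096
    clock_hz = 1.4e9         # 1.4 GHz
    flops_per_cycle = 4      # one per FPU

    rpeak = processors * clock_hz * flops_per_cycle
    print("Rpeak ~= %.1f TFLOPS" % (rpeak / 1e12))   # ~22.9 TFLOPS

    rmax = 19.9e12           # published LINPACK Rmax, from memory
    print("Efficiency = Rmax/Rpeak ~= %.0f%%" % (100 * rmax / rpeak))  # ~87%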
Top500.org's way of ranking a supercomputer is not adequate. They fail to consider some important factors like efficiency, price/performance ratio, manageability and TCO. The actual design of a supercomputer depends very much on the application's needs. The best way to benchmark is to run real-world applications and measure performance. For example, applications that are embarrassingly parallel don't really need an expensive high-performance, low-latency interconnect such as InfiniBand; GigE will do fine.
On the other side, if you are talking about processor clock speed: recently, processor manufacturers have realized that it makes more sense to maximize the number of cores in a processor than the frequency. Dual-core processors are already out in the market; quad- and octa-core parts are in development. Your argument is valid, and the industry is certainly moving in that direction.
Speaking of supercomputers replacing the human brain: we really don't know enough about the human brain. We don't even have a good, widely accepted definition of Artificial Intelligence. Computers have a long way to go to achieve human-like intelligence (30-50 years, maybe).
But with the capabilities of today's computing hardware, we can certainly take over a lot of routine human tasks smartly, mostly in the field of automation. AI software lags far behind the hardware. I very much agree with you on this subject: software engineers are more concerned with parsing XML or writing web applications than with the real problem of making machines intelligent, so that machines perform tasks themselves instead of us programming them every time.
The GNU project has entered the field of AI too. Look at:
http://directory.fsf.org/science/artintell/
http://www.tldp.org/HOWTO/AI-Alife-HOWTO.html
On 8/5/05, Anand Babu ab@gnu.org.in wrote:
,----[ Anand Babu ab@gnu.org.in ]
| Supercomputer manufacturers are *not* focused on increasing clock
| speed; instead, they try to maximize FLOPS (floating-point
| operations per second) for a given budget.
|
| If you look at Top500.org, they consider the following factors to
| grade a supercomputer: Rmax, Rpeak, Nmax and N1/2. [...]
`----
I am surprised that the Top500 ranking is solely about the ability of machines to do floating-point calculations, based on the LINPACK benchmark available at http://www.netlib.org/linpack/ More fundamentally, why is it that we still have not moved to fixed-point representations, in spite of great improvements in technology? It is a little unsettling for me, as I need to use rational numbers, and I need fixed-point representations to be assured that the scheme works.
Dynamism in Law is very well known and an accepted fact of life. Change is the law of nature, and everything, except of course the law of change itself, is subject to change. Law keeps pace with new developments and is therefore in a constant state of flux; legal rules have to be stored and executed accordingly.
Gaming rules and legal rules have many similarities from a computing point of view, but there is one very important difference: the rules of chess will not change, at least while a game is in progress, but life under law is dynamic. Imagine having to write a program in such a way that the rules of chess could change while a game is in progress. The Game of Life is governed by the Law, and the law of dynamism requires us to devise schemes that can accommodate rules changing even while the "game" is running.
Assuming that a set of rules runs linearly, we could number them sequentially and code them as follows:

1. Do this
2. Do that
3. Do foo
Now, if a new rule has to be introduced between 1 and 2, the easiest way is to assign the new rule a rational number obtained by a simple division: (1 + 2) / 2 = 1.5. This may be done ad infinitum, and we can elegantly handle the dynamic nature of rules: the rules sort in execution order however they are stored, giving good control over them. In this scheme, floating-point representation is unsafe, because repeated halving eventually exhausts the available precision; a fixed-point (or exact rational) representation is needed to avoid duplicate rows, ambiguous rules, etc.
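A minimal sketch of the scheme, using Python's exact rational type (the names here are mine, purely for illustration):

    # The rule-numbering scheme with exact rational arithmetic, so that
    # repeated midpoints never lose precision the way floats would.
    from fractions import Fraction

    rules = {Fraction(1): "Do this", Fraction(2): "Do that", Fraction(3): "Do foo"}

    def insert_between(a, b, text):
        # File the new rule at the exact midpoint of two existing keys.
        key = (Fraction(a) + Fraction(b)) / 2
        rules[key] = text
        return key

    k = insert_between(1, 2, "Do the newly introduced thing")  # key 3/2
    insert_between(1, k, "And again, ad infinitum")            # key 5/4

    for key in sorted(rules):   # rules always sort in execution order
        print(key, "->", rules[key])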
Apart from the need above, good representations for numbers on computers are fairly important if we want to make any serious progress. Since the processors themselves use FPUs, computer applications tend to reflect the same. Probably there were design limits 20 years ago, but I wonder if they still exist. We should have Rational Number Units that deal with fixed-point representations, and return inf if the system limit is reached.
,----[ Anand Babu ab@gnu.org.in ]
| On the other side, if you are talking about processor clock speed:
| recently, processor manufacturers have realized that it makes more
| sense to maximize the number of cores in a processor than the
| frequency. Dual-core processors are already out in the market;
| quad- and octa-core parts are in development. Your argument is
| valid, and the industry is certainly moving in that direction.
|
| Speaking of supercomputers replacing the human brain: we really
| don't know enough about the human brain.
`----
Daksinamurti Stotra begins plainly with a simple truth: ".. the world is within oneself even as a city reflected in a mirror is, but projected as if it is outside, by maya, as in a dream". I did not understand this clearly at once, but when I later saw the truth in it, I was shocked. "The Matrix" has capitalized on this :) An intro to the movie is available at http://awesomehouse.com/matrix/intro.html
Our ancients, having realised several truths, analysed them well and proposed several theories, testing and proving them through practice. All that wealth is very much available, and as the physiology of the brain becomes more transparent to scientists, those theories may get confirmed.
,----[ Anand Babu ab@gnu.org.in ]
| We don't even have a good, widely accepted definition of Artificial
| Intelligence. Computers have a long way to go to achieve human-like
| intelligence (30-50 years, maybe).
`----
"Arivu" in Tamil stems from the root word "Ari" meaning "to know". Knowledge is through the sense organs, and they are the seats of intelligence. Nanool considers even trees to be endowed with intelligence based on its sense of touch Higher forms have more sensory organs, but all of them are based on "touch". Our brain is loosely coordinating input from and issuing output commands with help from a sixtth sense that we call by various names. Basic goals for us include assuring ourselves of good food, clothing, shelter etc. AI for computer would essentially mean what goals *we* set for *it*. AFAIK, the legal rules are the best source of what is widely accepted as "common sense", atleast it provides a very strong basic foundation for building better systems. What I currently see is only attempts to build AI systems without the basic foundation, and therefore, they seem to fall to the ground or remain only as floating dreams.
,----[ Anand Babu ab@gnu.org.in ]
| The GNU project has entered the field of AI too.
| http://directory.fsf.org/science/artintell/
| http://www.tldp.org/HOWTO/AI-Alife-HOWTO.html
`----
That is not correct: the GNU Project, along with the *GPL*, is Artificial Life, a full-fledged AI system, evolving and growing strongly and steadily :)
Apologies for digressing from the list charter...
Anand Babu wrote:
The actual design of a supercomputer depends very much on the application's needs. The best way to benchmark is to run real-world applications and measure performance.
One question that I have always had about supercomputers is: do they need specialized programs to take advantage of their processing power? For example, if I have a program for computing X that runs on my existing cluster (say, a Beowulf), can I run that application on Thunder and take advantage of its speed, or do I have to rewrite it from scratch to run on Thunder?
Nowhere in the supercomputer literature have I read about writing applications for them, except for libraries like PVM. It would be interesting to know how real-world applications are written for supercomputers.
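To make the question concrete: the message-passing style I have seen in the literature looks roughly like the sketch below (mpi4py is used purely as an illustration; PVM code is similar in spirit, and whether such a program runs unchanged on Thunder is exactly my question):

    # Message-passing sketch: every node runs the same program, and
    # only the rank tells the copies apart.
    # Run with e.g.: mpirun -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the job
    size = comm.Get_size()   # total number of cooperating processes

    # Each process sums its own slice of 0..n-1 ...
    n = 1000000
    partial = sum(range(rank, n, size))

    # ... and the slices are combined on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print("sum of 0..n-1 =", total)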
,----[ Anand Babu ab@gnu.org.in ]
| Speaking of supercomputers replacing the human brain: we really
| don't know enough about the human brain. We don't even have a good,
| widely accepted definition of Artificial Intelligence. Computers
| have a long way to go to achieve human-like intelligence (30-50
| years, maybe).
`----
As the quote goes, Artificial intelligence is no match for natural stupidity :)
raj
On 8/7/05, Rajkumar S s_raj@flashmail.com wrote:
Apologies for digressing from the list charter...
Good that you found your way to the list. Now tell us what new hot projects are up and running at Sarovar ;)
,----[ Rajkumar S s_raj@flashmail.com ]
| One question that I have always had about supercomputers is: do
| they need specialized programs to take advantage of their
| processing power? For example, if I have a program for computing X
| that runs on my existing cluster (say, a Beowulf), can I run that
| application on Thunder and take advantage of its speed, or do I
| have to rewrite it from scratch to run on Thunder?
`----
AFAIK, a supercomputer, which may be a cluster of 1000+ individual computers, appears as a single system to its system administrator/end user. Applications like Chaos ( http://www.purehacking.com/chaos/ ) manage your other applications. We could be using OpenOffice, Postgres or any other application just as we would on any other system. This much I have gathered from AB; once I get my hands on such a machine, I'll update with more details :)
,----[ Rajkumar S s_raj@flashmail.com ]
| As the quote goes, Artificial intelligence is no match for natural
| stupidity :)
`----
Marvin Minsky pronounced AI to be "brain-dead" some time back, but that is not the same as being stupid :)
Ramanraj K wrote:
Good that you found your way to the list.
:)
Now tell us what new hot projects are up and running at Sarovar ;)
We are going strong; in fact, we approved two projects just now.
By the automatic activity count, top 5 are:
(100%) Rhyme Factory
( 97%) GRUB for DOS
( 95%) PSTricks Tutorial
( 93%) pdftex
( 91%) LuitLinux - a bootable Live CD distro
Here are some more numbers...
Month      Unique visitors   Visits   Pages    Hits
Jan 2005   20943             28952    122614   562474
Feb 2005   20701             27547    127607   536718
Mar 2005   21859             28953    121504   562670
Apr 2005   19292             25546    108562   509937
May 2005   20322             28600    141857   549787
Jun 2005   18658             25703    112333   485125
Jul 2005   18428             25045    130188   508551
Aug 2005    5289              6439     38456   138517
We are averaging 5 lakh hits per month!
raj
On 8/8/05, Rajkumar S s_raj@flashmail.com wrote:
Ramanraj K wrote:
...what new hot projects are up and running at Sarovar...
We are going strong; in fact, we approved two projects just now. [...]
That is impressive.
Also, I visited http://sarovar.org/ and saw that Sarovar is hosting about 317 projects to date, categorised at http://sarovar.org/softwaremap/trove_list.php More than 70 projects have crossed the beta stage, and these could also be listed as projects from India in the lists FN, Arky and others are compiling and maintaining.
Thanks, Ramanraj.
On Mon, 2005-08-08 at 07:41 +0530, Ramanraj K wrote:
Also, I visited http://sarovar.org/ and saw that Sarovar is hosting about 317 projects to date, categorised at http://sarovar.org/softwaremap/trove_list.php More than 70 projects have crossed the beta stage, and these could also be listed as projects from India in the lists FN, Arky and others are compiling and maintaining.
We had added some projects, but due to a shortage of hands a lot of projects are not yet listed on the Directory of Free Software Projects: http://bangalore.gnu.org.in
It would be great if developers and volunteers joined in and kept the list up-to-date.
Cheers