Newsgroups: comp.parallel,comp.sys.super
From: eugene@sally.nas.nasa.gov (Eugene N. Miya)
Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)
Subject: [l/m 4/8/97] Who runs the ||ism-community? -- comp.parallel (12/28) FAQ
Organization: NASA Ames Research Center, Moffett Field, CA
Date: 12 Mar 1998 13:03:08 GMT
Message-ID: <6e8med$leq$1@cnn.nas.nasa.gov>
Archive-Name: superpar-faq
Last-modified: 8 Apr 1997
12 Who runs the ||ism-community?
14 References
16
18 Supercomputing and Crayisms
20 IBM and Amdahl
22 Grand challenges and HPCC
24 Suggested (required) readings
26 Dead computer architecture society
28 Dedications
2 Introduction and Table of Contents and justification
4 Comp.parallel news group history
6 parlib
8 comp.parallel group dynamics
10 Related news groups, archives, test codes, and other references
Why this panel?
---------------
One man's research is another man's application.
A significant undercurrent belies high performance computing:
the lines of communication between computer system builders and application
developers are something like the "war" between men and women.
Alan Turing never met John Gray [Men are from Mars, Women are from Venus].
Users can't understand what's taking so long and why programming is so hard.
Programmers and architects, perennial optimists [except me, the Resident Cynic]
always promise more and tend to deliver late.
This panel needs a lot of work, because my exposure is limited to a
handful of communities: three- and four-letter agencies, and people and
friends in the physics, chemistry, biology, and earth science communities
in academia and industry. Etc., etc. Add what you want.
How parallel computing is like an elephant (with blind men)
-----------------------------------------------------------
Who runs the computer industry?
-------------------------------
A little road map
-----------------
Programmers are from Mars.
Users are from Venus.
--E. Miya, March 1996, DL'96
God didn't have an installed base.
--Datamation ??
This section attempts to cover topics relating to various sub-cultures
in the high-performance computing market. If you don't understand something,
you aren't alone. If you think you understand something, you don't.
These topics are long-standing (from net.arch in the pre-comp.parallel
and pre-comp.sys.super days).
If parallel computing is a business, are the customers always right?
Scale
-----
How would you like programs to run twice as fast? How about 10% faster?
Not as impressive? In this group, factors of 2-4 aren't impressive.
From my experience, at agencies like the Dept. of Energy (formerly ERDA and AEC),
factors of 8-16 (around 10) interest people. Keyword: EXPECTATION:
At 3 MIPS, the CDC 6600 at its appearance was 50x faster than its predecessor,
the ERA/UNIVAC 1604. WE DO NOT SEE THIS DEGREE OF GAIN CONSISTENTLY.
With that reference, we proceed.
Clearly, smaller speed ups (percentage improvements or 2-4x) are useful
to some users, but this is illustrative of the nature of Super-scales.
Traditional computer science teaches about the time-space trade-off
in computation. SUPERCOMPUTING DOESN'T ALLOW TRADEOFFS.
We must distinguish between
ABSOLUTE performance (typically measured by wall-clock)
RELATIVE performance (normalized or scaled percentage (%))
If you have a problem and you can't trade one for the other:
then it MIGHT be a supercomputing problem.
Run time tends to be either too short or too long: then it might not be super
anymore. The definition is a moving wave.
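The absolute/relative distinction above can be sketched numerically. The
figures here are made up for illustration (they are not measurements from
the text); the point is that wall-clock savings and normalized speedup are
different quantities and can tell different stories.

```python
# Absolute vs. relative performance, with hypothetical numbers.
baseline_secs = 100.0   # wall-clock time on the old machine (assumed)
new_secs = 2.0          # wall-clock time on the new machine (assumed)

absolute_gain = baseline_secs - new_secs     # seconds saved (wall-clock)
relative_gain = baseline_secs / new_secs     # normalized speedup factor

print(f"absolute: {absolute_gain:.0f} s of wall-clock saved")
print(f"relative: {relative_gain:.0f}x faster (a CDC 6600-style gain)")
```

A 50x relative gain on a one-second job saves almost nothing in absolute
terms; the same factor on a week-long job is transformative.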
Problem scale: space: problems of O(n^2) and O(n^3) are common. O(n^4),
from things like homogeneous coordinate systems, is increasingly common.
Remember: with n processors, parallel computing offers at best an O(n)
reduction against the problem growth above.
Problem scale: time: O(n^3) and greater through NP-complexity
One perennial FAQ (sci.math) is proving P == NP.
That won't be covered here. (E.g., the Cray-XXX [choose a model number]
can execute an infinite loop in 10 seconds [some finite figure].
Yes, people have posted that joke here.)
Processors are typically scaled (added) at O(n) or at best O(n^2).
Any improvement claims must be viewed realistically, bordering on the skeptical.
This is why claims of superlinear speed up (properly, super-unitary) should
be viewed with great skepticism. Clearly, people working in this area
need the proverbial pat on the back, but giddy claims only serve to hurt
the field in the long run.
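A minimal sketch of the skepticism above, with made-up timings: compute
speedup and per-processor efficiency, and flag any claim that exceeds the
processor count (super-unitary), which usually signals a baseline or
measurement problem rather than a breakthrough.

```python
# Toy speedup check (hypothetical timings, not data from this FAQ).
def speedup_report(t_serial, t_parallel, n_procs):
    speedup = t_serial / t_parallel
    efficiency = speedup / n_procs        # 1.0 would be perfect scaling
    super_unitary = speedup > n_procs     # view with great skepticism
    return speedup, efficiency, super_unitary

s, e, suspicious = speedup_report(t_serial=64.0, t_parallel=4.0, n_procs=8)
print(f"speedup {s:.0f}x on 8 procs, efficiency {e:.0%}, suspicious={suspicious}")
```

Here 16x on 8 processors gives 200% "efficiency": before celebrating,
check whether the serial baseline was simply thrashing its cache or memory.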
Nothing like showing 2-D computational results,
when end-user customers work on 3-D problems.
Additionally, people tend to assume synchronous systems.
Asynchronous systems are even more "fun."
Let's bring cost into the discussion:
Since the 1970s, it has been realized that complete processor or memory
connectivity (the typical example given was a cross-bar) scales as O(n^2)
interconnections.
Over time, various multistage interconnection networks (MINs)
have scaled this down to variations around O(n ln n)
[interpreted: this is still more than O(n)].
This contrasts with the perceived dropping cost of electronics
(semiconductor substrate). See the Wulf quote about CPUs on an earlier panel.
This is not a problem for small scale (4 to 8-16 processors,
aren't you glad you got those numbers?) on a bus, but investigators
(users) want more power than that. It is VERY hard to justify these
non-linear costs to people like Congress:
"You mean I pay for 8 processors and I get 4x the performance?
You have some serious explaining to do, son."
This brings up the superlinear speed up topic (more properly called
"superunitary speed up"). That is another panel.
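The interconnect cost curves above can be put side by side with a few
idealized counts (constant factors ignored; these are textbook growth
rates, not figures for any particular machine):

```python
import math

# Idealized link counts for n processors:
#   crossbar: full connectivity, O(n^2) crosspoints
#   MIN:      multistage interconnection network, around O(n log n) links
def crossbar_links(n):
    return n * n

def min_links(n):
    return int(n * math.log2(n))

for n in (8, 64, 1024):
    print(f"n={n:5d}  crossbar={crossbar_links(n):8d}  MIN~{min_links(n):6d}")
```

At n=1024 the crossbar needs about a million crosspoints versus roughly
ten thousand MIN links, yet even the MIN still grows faster than O(n),
which is exactly the non-linear cost that is hard to explain to Congress.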
What are some of the problems?
------------------------------
The technical problems interact with some of the economic/political problems.
First comes consistency. You take it for granted. (Determinism)
Say to yourself,
asymmetric
boundary conditions
exception handling
consistency
starvation
deadlock
state of the art limits of semiconductors
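As a hypothetical illustration of the deadlock entry in the list above
(my example, not from the original text): two threads that acquire two
locks in opposite orders can each wait forever on the other. The classic
fix, sketched here, is a single global lock ordering for every thread.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Deadlock recipe: thread 1 takes A then B while thread 2 takes B then A.
# Fix sketched here: every thread acquires in one global order (A, then B),
# so a circular wait cannot form.
def worker(name, results):
    with lock_a:          # consistent order: A first...
        with lock_b:      # ...then B, in every thread
            results.append(name)

results = []
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))    # both threads completed without deadlock
```

Note that this only prevents deadlock; starvation and consistency (also in
the list above) need separate treatment.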
If you are not in the computing industry, you might be confused by
time scales. Silicon Valley does not operate on the same wavelengths
or time scales as other parts of the world.
It is estimated that the US Government takes an average of 430 days (1990)
to purchase a supercomputer. The typical memory chip has a useful commercial
lifetime of two years before it is succeeded by better technology.
Fields like astronomy and glaciology may take a year or two to referee
some papers.
Refereed papers in computing are frequently obsolete by the time they appear,
and seminars and conferences tend to carry more weight
(including personal followup). The speed at which some ideas are discarded
can be particularly fast.
Many parts of the computer community tend to assume their users' environments
behave very much like their own. This is usually not the case.
This is why I value my contacts outside the computer industry.
A funny relationship exists.
Traditional science has been characterized by theory and experiment.
In the late 1980s, several key Nobel Laureates (remember that there are
no Nobel Prizes for Math or Computing) starting with people
like Ken Wilson and continuing the tradition with Larry Smarr
have argued for a third part: computational science.
The silent majority in many sciences sometimes rebuts this, saying:
Any field which has to use 'science' in its name isn't one.
[R.P. Feynman Lectures on Computation]