The three things most needed in a Supercomputer are fast arithmetic,
large memories, equally fast I/O, and a usable, robust operating
environment. {So I can't count ... either.} A typical problem may
require between 10^14 and 10^17 arithmetic operations; clearly, the
floating point unit has to be big and fast, and so does the memory
(typically each floating point operation is responsible for 24 bytes
of memory traffic). Very large memories are needed to accommodate
the billions of WORDS needed for each data set, and the memory
traffic on average has to move 3 words (24 bytes: 16 in and 8 out)
per floating point operation.
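To make those figures concrete, here is a back-of-the-envelope sketch
in Python. The 10^15 operation count and 24 bytes per operation come
from the text above; the sustained 1 Gflop/s machine rate is purely an
assumed illustration:

    # Rough scale of a "typical" Supercomputer problem.
    # ops and bytes_per_op come from the text; flops is assumed.
    ops = 1e15           # arithmetic operations (text: 10^14 .. 10^17)
    flops = 1e9          # assumed sustained rate: 1 Gflop/s
    bytes_per_op = 24    # 3 words of 8 bytes: 16 in, 8 out

    runtime_s = ops / flops
    bandwidth = flops * bytes_per_op   # memory traffic, bytes/second

    print(f"runtime: {runtime_s / 86400:.1f} days")    # ~11.6 days
    print(f"bandwidth: {bandwidth / 1e9:.0f} GB/s")    # 24 GB/s

Even at a sustained gigaflop, such a problem runs for days, and the
memory system must stream tens of gigabytes per second to keep up.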
It is typical that every data point is re-computed every cycle, and
it is usual that on each cycle each point requires tens of thousands
of arithmetic operations. Often there is an enormous gush of
output. Number ranges are often too big for anything other than
floating point. Notwithstanding, there is always a problem of
retaining some numerical significance, so multiple precision
arithmetic is vital. Perhaps you can guess that there is just a
limited number of genuine Supercomputing applications, but even this
number shrinks dramatically when we try to see who is willing to pay
for developing a given problem.
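The precision point shows up in a few lines of Python; the specific
numbers here are an assumed illustration of catastrophic cancellation,
not figures from the text:

    # Subtracting nearly equal numbers wipes out significant digits
    # in fixed-precision floating point ("catastrophic cancellation").
    from decimal import Decimal, getcontext

    print(1.0000001 - 1.0000000)   # not exactly 1e-07; trailing
                                   # digits are rounding noise

    getcontext().prec = 50         # emulate multiple precision
    print(Decimal("1.0000001") - Decimal("1.0000000"))   # exactly 1E-7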
Given that there is a problem that needs to be run on a Supercomputer,
it should be obvious that it won't run constantly, so hundreds or
thousands of smaller problems can be run in the gaps. A common
mistake among certain critics is to say that this is a waste of
Supercomputer resources, or that it is simply not cost-effective. It
should be clear from the foregoing that the criticism is simply
wrong-headed. Supercomputing is not defined by these little problems,
but they can and do benefit enormously by being run on big machines.
A little problem on a big machine is easily managed, but it becomes a
big problem on a little machine. So, instead of wasting a lot of time
trying to figure out how to fit it into a small or otherwise inadequate
machine, the developer is free to concentrate on making the problem do
what he wanted it to do. (I admit that some people like to squeeze
every last twitch out of a computer. To others, such behavior is
unseemly for real gentlemen.)
The price of a Supercomputer is not, by itself, a clearly useful
criterion. To see this, we review some (cynical) definitions of
Supercomputing.
1. The first idea we considered is that a Supercomputer is that
machine which will run your problem the fastest. Notice it's your
problem, and it's not necessarily the machine you are currently using,
and the concept of "fastest" is one you should control. You might
reasonably decide that it's total through-put time that matters rather
than how fast one can do arithmetic.
2. Showing some real insight, Ken Batcher stated that
"A Supercomputer is one that will change your compute-bound problem into
one that is I/O-bound." Notice here that the limitations related to
I/O are intrinsically tied to the nature of the problem. Words like
"entry-level," and "mini-super" have entered the lexicon via
over-zealous marketers and salesmen in order to sell things that are
not necessarily supercomputers. (The term itself first appeared about
twelve years ago. Who first used it is subject to argument, but Jack
Worlton was one of the first to speak about such machines, and today is
a leading advocate in trying to change the term to (Ultra) High Speed
or Performance Computing, because as he notes, marketers have
thoroughly trashed the original meaning of "Supercomputer.")
Typically the inadequacies of small computers reside largely in their
inability to support adequate memory traffic and FAST I/O.
3. Still another kind of insight: Neil Lincoln says that a
Supercomputer is one that is only one generation away from being what
you really need.
We can observe here that it is the application that decides whether
you're using a Supercomputer. Finally, to put extra stress on the idea
that "the concept of 'fastest' is one you should control," consider
what you would hold important if all the arithmetic in your problem
took zero time. How fast would your problem run, and now what would
you do to make it go faster?
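That thought experiment is just Amdahl's law with arithmetic as the
part being sped up. A minimal sketch in Python; the 60/40 split
between arithmetic time and I/O-plus-memory time is an assumed
example, not a figure from the text:

    # If only the arithmetic gets faster, runtime is bounded below
    # by the I/O and memory time. The 60/40 split is assumed.
    t_arith = 0.6    # fraction of runtime spent on arithmetic
    t_other = 0.4    # fraction spent on I/O and memory traffic

    for speedup in (1, 2, 10, float("inf")):
        total = t_arith / speedup + t_other
        print(f"arithmetic {speedup}x faster -> "
              f"{total:.2f} of original runtime")

Even with infinitely fast arithmetic, the runtime never drops below
the I/O and memory share, which is exactly Batcher's point about
compute-bound problems turning I/O-bound.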
I hereby apologize for the obscene length of this polemic.
Cray Research managed to build its first shippable Cray 1 for
under $10 million. Of course, it did not have much software.
Convex managed to build its first shippable C-1 for around $20 million,
including software.
Others have also gotten to market with innovative hardware for a lot
less than $100 million. Critical issues for startups:
1) Know what your initial target market is and understand what it requires.
2) Maintain focus and don't allow significant investment in anything that
does not aid that market.
3) Do everything you can to minimize time to market. Every extra month just
burns more money.
4) Keep your initial staff small (just large enough to do the job).
More people mean more time spent communicating instead of doing.
If you are very selective in hiring, accepting only the top 10% of
potential candidates, and motivate them with "average salaries and
extraordinary stock options, plus exciting work," you should be able
to get several times the industry-average effort and productivity.
5) Use other people's work whenever possible - like starting with Unix as
an OS instead of inventing your own. Look for strategic partnerships
in as many areas as possible. These efforts will help (3) and (4).
6) Be lucky. :-)
[It helps to be first one to market in your niche, or for your target
market to experience a business boom just as your product is ready.
It also helps for your vendors to deliver what they promise. The
reverse of any of these can sink you. Examples abound.]
Memorial contributions may be
made to the Pikes Peak Area Trails Coalition, 1426 N. Hancock, Suite 4,
Colorado Springs, CO 80903 or the Seymour R. Cray memorial at the
University of Minnesota.
The following interview with Seymour Cray appeared in the December 1982
issue of Datamation magazine. The interviewer, Jan Johnson, posed the
same questions to four people: Gene Amdahl, Victor Poor, James Thornton,
and Seymour Cray. There were also follow-up questions addressed to
individuals. Transcribed below are the questions addressed to Cray, and
his answers. All ellipses are in the text as it appeared in Datamation.
A lot of Cray's answers are memorable.
Tom Ace
tea@netcom.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Datamation: What technological developments in the past five to 10 years
have had the biggest impact on your niche of the computer industry?
Cray: I guess there haven't been any.
Datamation: No developments in mathematics, architecture, or in new ways
of looking at things?
Cray: Well, the problem I have with probably most of these questions is
that I don't pay much attention to what is going on in the world. I just
do my own kind of work, so if there were something new in mathematics, I
wouldn't know about it.
Datamation: What have been some of the driving forces behind the changes
in your niche of the industry?
Cray: If Fairchild would quit trying new technology out on us, we'd get
our parts a lot faster. They are always giving us this new technology
and of course it doesn't work, so they have to keep trying it again. It
seems like a real deterrent to getting our job done because it's never
necessary to have the new technology.
Datamation: What do you call state of the art?
Cray: I suppose that it is whatever you can do.