expansion when possible or include the macro with the citation.
Leave it out, and you will appear like
"one of those arrogant computer scientists..." to quote a friend.
Less volatile acronyms (accepted in the community):
SISD: [Flynn's terminology] Single-Instruction stream, Single-Data stream
SIMD: [Flynn's terminology] Single-Instruction stream, Multiple-Data stream
MISD: [Flynn's terminology] Multiple-Instruction stream, Single-Data stream
MIMD: [Flynn's terminology] Multiple-Instruction stream, Multiple-Data stream
PRAM: Parallel Random Access Machine
QRQW: Queued Read, Queued Write (PRAM variant)
EREW: Exclusive Read, Exclusive Write (PRAM variant)
CREW: Concurrent Read, Exclusive Write (PRAM variant)
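The read/write access rules above can be made concrete with a small simulation. This is my own illustrative sketch, not from the FAQ: a tree reduction (maximum) in the CREW style, where in each step every simulated processor may read shared cells concurrently, but each one writes only its own, disjoint cell.

```python
# Illustrative sketch (not from the FAQ): a CREW-PRAM-style tree reduction.
# In each of the O(log n) steps, the simulated processors read shared memory
# concurrently (concurrent read) but write pairwise-disjoint cells
# (exclusive write).

def crew_max(shared):
    """Return the maximum of `shared` via pairwise tree reduction."""
    mem = list(shared)          # the simulated shared memory
    n = len(mem)
    stride = 1
    while stride < n:
        # All "processors" active in this step read concurrently...
        writes = {}
        for i in range(0, n - stride, 2 * stride):
            writes[i] = max(mem[i], mem[i + stride])
        # ...then each writes its own cell, with no write conflicts.
        for i, v in writes.items():
            mem[i] = v
        stride *= 2
    return mem[0]
```

An EREW variant would additionally have to stage the reads so no two processors touch the same cell in the same step; the queued (QRQW) model instead charges each step a cost proportional to the longest access queue.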
ASCI = Accelerated Strategic Computing Initiative
(i.e. simulating nuclear bombs, so we don't feel
compelled to blow them up in order to test them.)
ASCI Red = the Intel machine at Sandia National Labs,
consisting of >9000 200 MHz Pentium Pro cpus
in a 2-D mesh configuration.
ASCI Blue = Two systems, both targeted at 3 TFLOPS peak,
1 TFLOPS sustained:
1. A future IBM machine to be installed at Lawrence
Livermore National Labs. By the end of 1998 or
early 1999, it should be 512 SMP machines in
a message-passing cluster. Each machine is based
on 8 PowerPC 630 processors. For starters, IBM
has installed an SP-2 machine.
2. A future SGI/Cray machine to be installed at
Los Alamos National Labs. By the end of 1998 or
early 1999, it should be a 3072-cpu distributed
shared memory system, based on a future SGI/MIPS
processor. For starters, SGI has installed a
moderately large number of 32-cpu Origin 2000 systems.
Shared Memory
1. A glossary of terms in parallel computing can be found at:
http://www.npac.syr.edu:80/nse/hpccgloss/hpccgloss.html
(Most of this was taken from my IEEE P&DT article w/o my
permission, and without proper credit; the credit thing has
apparently now been fixed.)
2. My history of parallel computing is available as technical
report CSRI-TR-312 from the Computer Systems Research Institute,
University of Toronto, at:
http://www.cdf.toronto.edu/DCS/CSRI/CSRI-OverView.html
%A Gregory V. Wilson
%T A Chronology of Major Events in Parallel Computing
%R CSRI-312
%I U. of Toronto, DCS
%D December 1994
%X ftp.csri.toronto.edu cd csri-technical-reports
Remember:
http://www.ucc.ie/info/net/acronyms/acro.html
URLs
----
http://www.cray.com/      # this might change
http://www.convex.com/    # this might change
http://www.ibm.com/
Got the pattern?
http://spud-web.tc.cornell.edu/HyperNews/get/SPUserGroupNT.html
http://www.umiacs.umd.edu/~dbader/sites.html
http://www.cnct.com/~gunter
http://parallel.rz.uni-mannheim.de/top500/top500.html
Brazil Parallel Processing Homepage
http://www.dcc.ufmg.br/~kitajima/sbac-eng.html
Dataflow Webpages
http://www.csg.lcs.mit.edu/
http://odyssey.ucc.ie/www/user-dirs/oregan/dataflow.html
dataflow-request@boole.ucc.ie (administrative requests go here)
subscribe dataflow-list as user@host.site.domain (Real Name)
sub dataflow-list (unsub dataflow-list to quit the list)
dataflow-list@boole.ucc.ie (postings go here)
Also
HPCC
Other mailing lists
-------------------
pario
sp2
Where can I find "references?"
------------------------------
BEWARE: The Law of Least Effort! (*if you need this reference, mail me.)
The references provided herein are not intended to be comprehensive for the
most part. That's the purview of a bibliography.
The major biblios I am aware of:
Mine, and I will attempt to integrate the following as well
Cherri Pancake's parallel debugging biblio
David Kotz's parallel I/O biblio
H.T. Kung's Systolic array biblio
http://liinwww.ira.uka.de/bibliography/Parallel/index.html
NCSTRL Project: (from ARPA: CSTR)
http://www.ncstrl.org
and
the Unified CS TR index:
http://www.cs.indiana.edu:800/cstr/search
If you ask a query, and I know the answer, I might give you a quick
search off the top of the biblio, but I'm not your librarian.
I am a Journal Associate Editor for John Wiley & Sons, Inc.
If I don't answer, I don't have the time or don't know you well enough.
Knowledgeable people have up to date copies of my biblio
(and the other biblios).
If you are a student or a prof, and you assemble a biblio on some topics,
1) If you use one of these biblios, ACKnowledge that fact.
2) If you post it, separate the new entries and submit them directly to me.
If you don't, you create busy work for those of us maintaining these biblios,
because we have to resolve entry collisions. That's not as simple as you might
think: name differences (full vs. abbreviated names), BibTeX macros without the
expansion (do you have any appreciation how irksome that is to some people?), etc.
Assembling a biblio is a fine student exercise, BUT
it should build on existing information. It should also minimize the
propagation of typos and other errors (we are all still finding them in
the existing biblios).
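As an illustration of why entry collisions are tedious, here is a hedged sketch (function names and heuristics are my own, not from any of the biblios above) that flags likely duplicates across merged biblios despite full-vs-abbreviated author names:

```python
# Hypothetical helper: flag likely duplicate bibliography entries even when
# one biblio spells out author names and another abbreviates them.
# Entries are modeled as plain dicts with "author" and "title" keys.
import re

def normalize(name):
    """'Gregory V. Wilson' and 'G. Wilson' both map to 'g wilson'."""
    parts = re.sub(r"[.,]", " ", name.lower()).split()
    if not parts:
        return ""
    first, last = parts[0], parts[-1]
    return f"{first[0]} {last}"   # keep first initial + surname only

def collision_key(entry):
    """Key on normalized first author + whitespace-collapsed title."""
    author = entry.get("author", "").split(" and ")[0]
    title = re.sub(r"\s+", " ", entry.get("title", "").lower()).strip()
    return (normalize(author), title)

def find_collisions(entries):
    """Return (kept, duplicate) pairs sharing a collision key."""
    seen, dups = {}, []
    for e in entries:
        k = collision_key(e)
        if k in seen:
            dups.append((seen[k], e))
        else:
            seen[k] = e
    return dups
```

This is deliberately crude (it would miss a retitled reprint, and a shared surname plus identical title would false-positive), which is exactly the point: real merging still needs a human eye.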
Notorious (frequently posted) biblio topics:
MINs (multistage interconnection networks).
Load balancing.
Checkpointing.
While clearly important, these are topics which bore and upset some people
(ignore them, they can hit 'n' on their news system). You are supposed to
kill file this FAQ after reading it (subject to last modified dates,
of course).
Some very telling personal favorite quotes from the literature of
parallel processing:
-----------------------------------------------------------------
[Wulf81] describes the plight of the multiprocessor researcher:
.(q
We want to learn about the consequences of different designs on
the useability and performance of multiprocessors.
Unfortunately, each decision we make precludes us from exploring its
alternatives. This is unfortunate, but probably inevitable for hardware.
Perhaps, however, it is not inevitable for the software....
and especially for the facilities provided by the operating system.
.)q
[Wulf81, p. 276]:
.(q
In general, we believe that it's possible to make two major mistakes at the
outset of a project like C.mmp. One is to design one's own processor;
doing so is guaranteed to add two years to the length of the project and,
quite possibly, sap the energy of the project staff to the point that nothing
beyond the processor ever gets done. The second mistake is to use someone
else's processor. Doing so forecloses a number of critical decisions, and thus
sufficiently muddies the water that crisp evaluations of the results are
difficult. We can offer no advice. We have now made the second mistake\**
\*- for variety, next time we'd like to make the first! Given the chance, our
processor would:
.(f
\**[Wulf81]: Twice, in fact.
The second multiprocessor project at C-MU, $Cms$, also uses the PDP-11.
.)f
.(b F
Be both inherently more reliable and go to extremes not to propagate errors;
once an error is detected, it would report that error without further effect
on the machine state.
Provide rapid domain changing; we see no inherent reason that this should
require more than, say, a dozen instruction times.
Provide an adequate address space; actually, rather than a larger number of
address bits, we would prefer true capability-based addressing [Fabry74] at
the instruction level since this leads to a logically infinite address space.
.)b
.)q
"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other reason -- including blind
stupidity." -- Wm. A. Wulf
Make it work first before you make it work fast.
--Bruce Whiteside in J. L. Bentley, More Programming Pearls
Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel