Newsgroups: comp.parallel,comp.sys.super
From: eugene@sally.nas.nasa.gov (Eugene N. Miya)
Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)
Subject: [l/m 7/23/97] Suggested readings comp.par/comp.sys.super (24/28) FAQ
Keywords: REQ,
Organization: NASA Ames Research Center, Moffett Field, CA
Date: 24 May 1998 12:03:07 GMT
Message-ID: <6k929r$7ja$1@sun500.nas.nasa.gov>
Archive-Name: superpar-faq
Last-modified: 23 Jul 1997
24  Suggested (required) readings          < * this panel * >
26  Dead computer architecture society
28  Dedications
 2  Introduction and Table of Contents and justification
 4  Comp.parallel news group history
 6  parlib
 8  comp.parallel group dynamics
10  Related news groups, archives and references
12
14
16
18  Supercomputing and Crayisms
20  IBM and Amdahl
22  Grand challenges and HPCC
So you didn't search TM-86000? (panel 14).
Here's the context: this is more parallel (rather than super) computing
oriented.
Every calendar year, I ask in comp.parallel for everyone's opinions
on what people should be reading. I couch this with the proviso that
the reader be at least a 1st or 2nd year grad student in computer science
or related technical field. This presumes some basic ACM CORE curriculum
knowledge like:
basic computer architecture,
compilers,
operating systems, and some numerical analysis
(some would argue: not enough, but that's a separate argument).
For better or worse, it's done numerically (a mid-1980s experiment).
Every suggester gets "10 votes."
You will see the 10 perceived "REQUIRED" readings in parallel computing
by your colleagues, and they are very good colleagues (JH, DP, DH, etc.).
Disadvantages:
1) sometimes 10 votes is not enough (I made the rules, I can make
exceptions).
2) new unfamiliar books tend to take time to make it to "the top-10."
Yes, some references might be old, so vote for newer references
and encourage your colleagues to "vote" for those references, too.
3) for those, we have a RECOMMENDED 100 (for recommended class
reading lists). Search panel 14 in TM-86000 to find them.
I might make a separate FAQ panel later. Ten is enough for now.
Some people will claim "anti-votes." Sorry, I have no provision for anti-votes
except to note them in annotations. Watch for them!
And if you have voted in the past and wish to change your "vote,"
just ask.
We are not doing this to sell textbooks. This is merely a yearly opinion
survey. You can suggest 10 at just about any time (especially if you want to
N an existing endorsement, or anti, or whatever).
COME ON, COME ON! You are long-winded.
-------------
Here:
REQUIRED
%A George S. Almasi
%A Allan Gottlieb
%T Highly Parallel Computing, 2nd ed.
%I Benjamin/Cummings division of Addison Wesley Inc.
%D 1994
%K ISBN 0-8053-0443-6
%K ISBN # 0-8053-0177-1, book, text, Ultracomputer, grequired96, 91,
%d 1st edition, 1989
%K enm, cb@uk, ag, jlh, dp, gl, dar, dfk, a(umn),
%$ $36.95
%X This is a kinda neat book. There are special net anecdotes
which make this interesting.
%X Oh, there are a few significant typos: LINPAK is really LINPACK. Etc.
These were fixed in the second edition.
%X It's cheesy in places and the typography is
pitiful, but it's still the best survey of parallel processing. We really
need a Hennessy and Patterson for parallel processing.
(The typography was much improved in the second edition, so much of
the cheesy flavor is gone --ag.)
%X (JLH & DP) The authors discuss the basic foundations, applications,
programming models, language and operating system issues and a wide
variety of architectural approaches. The discussions of parallel
architectures include a section that describes the key concepts within
a particular approach.
%X Very broad coverage of architecture, languages, background theory,
software, etc. Not really a book on programming, of course, but
certainly a good book otherwise.
%X Top-10 required reading in computer architecture to Dave Patterson.
%X It is hardware oriented, but makes some useful comments on programming.
%A Michael Wolfe
%T Optimizing Supercompilers for Supercomputers
%S Pitman Research Monographs in Parallel and Distributed Computing
%I MIT Press
%C Cambridge, MA
%D 1989
%d October 1982
%r Ph. D. Dissertation
%K parallelization, compiler, summary,
%K book, text,
%K grequired91/3,
%K cbuk, dmp, lls, +6 c.compilers,
%K Recursion removal and parallel code
%X Good technical intro to dependence analysis, based on Wolfe's PhD Thesis.
%X This dissertation was re-issued in 1989 by MIT Press under its Pitman
parallel processing series.
%X ...synchronization and locking instructions when compiling the
parallel procedures and those called by them. This is a bit like
the 'random synchronization' method described by Wolfe but
works with pointer-based data structures rather than array elements.
%X Cited Chapters:
Data Dependence 11-57
Structure of a Supercompiler 214-218
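A minimal sketch in C (not an example taken from Wolfe's text) of the
loop-carried dependence idea that dependence analysis is about: the first
loop below reads a value written by the previous iteration and so must run
in order; the second loop's iterations are independent and a parallelizing
compiler may vectorize or parallelize them.

    /* Minimal sketch, not from Wolfe's book: loop-carried dependence.
     * The first loop reads a[i-1] written by the previous iteration,
     * so its iterations must run in order; the second loop's
     * iterations are independent and can be run in parallel or
     * vectorized by a parallelizing compiler. */
    #include <stdio.h>

    #define N 8

    int main(void)
    {
        double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        for (int i = 1; i < N; i++)      /* loop-carried flow dependence */
            a[i] = a[i - 1] + b[i];

        for (int i = 0; i < N; i++)      /* no loop-carried dependence */
            c[i] = a[i] * b[i];

        printf("a[%d] = %g, c[%d] = %g\n", N - 1, a[N - 1], N - 1, c[N - 1]);
        return 0;
    }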
%A W. Daniel Hillis
%A Guy L. Steele, Jr.
%Z Thinking Machines Corp.
%T Data Parallel Algorithms
%J Communications of the ACM
%V 29
%N 12
%D December 1986
%P 1170-1183
%r DP86-2
%K Special issue on parallel processing,
grequired97: enm, hcc, dmp, jlh, dp, jwvz, sm,
CR Categories and Subject Descriptors: B.2.1 [Arithmetic and Logic Structures]:
Design Styles - parallel; C.1.2 [Processor Architectures]:
Multiple Data Stream Architectures (Multiprocessors) - parallel processors;
D.1.3 [Programming Techniques] Concurrent Programming;
D.3.3 [Programming Languages] Language Constructs -
concurrent programming structures; E.2 [Data Storage Representations]:
linked representations; F.1.2 [Computation by Abstract Devices]:
Modes of Computation - parallelism; G.1.0 [Numerical Analysis]:
General - parallel algorithms,
General Terms: Algorithms
Additional Key Words and Phrases: Combinator reduction, combinators,
Connection Machine computer system, log-linked lists, parallel prefix,
SIMD, sorting, Ultracomputer
%K Rhighnam, algorithms, analysis, Connection Machine, programming, SIMD, CM,
%X (JLH & DP) Discusses the challenges and approaches for programming an SIMD
machine like the Connection Machine.
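For flavor, a minimal sequential sketch (mine, not code from the paper) of
the log-step data-parallel prefix sum ("scan") the article popularized: on
step d every element i >= 2^d adds, in lockstep, the element 2^d positions
to its left, so after ceil(log2 n) steps x[i] holds x[0]+...+x[i]. The
temporary array below stands in for the synchronous SIMD update.

    /* Sketch, not from the paper: sequential simulation of the
     * log-step data-parallel inclusive scan (prefix sum). */
    #include <stdio.h>
    #include <string.h>

    #define N 8

    static void inclusive_scan(int x[], int n)
    {
        int tmp[N];
        for (int offset = 1; offset < n; offset *= 2) {
            memcpy(tmp, x, n * sizeof x[0]);
            for (int i = offset; i < n; i++)   /* "parallel" step, simulated */
                x[i] = tmp[i] + tmp[i - offset];
        }
    }

    int main(void)
    {
        int x[N] = {3, 1, 7, 0, 4, 1, 6, 3};
        inclusive_scan(x, N);
        for (int i = 0; i < N; i++)
            printf("%d ", x[i]);               /* 3 4 11 11 15 16 22 25 */
        printf("\n");
        return 0;
    }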
%A C. L. Seitz
%T The Cosmic Cube
%J Communications of the ACM
%V 28
%N 1
%D January 1985
%P 22-33
%r Hm83
%d June 1984
%K grequired91: enm, dmp, jlh, dp, j-lb, jwvz,
Rcccp, Rhighnam,
%K CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]:
Multiple Data Stream Architectures (Multiprocessors);
C.5.4 [Computer System Implementation]: VLSI Systems;
D.1.2 [Programming Techniques]: Concurrent Programming;
D.4.1 [Operating Systems]: Process Management
General terms: Algorithms, Design, Experimentation
Additional Key Words and Phrases: highly concurrent computing,
message-passing architectures, message-based operating systems,
process programming, object-oriented programming, VLSI systems,
homogeneous machine, hypercube, C^3P,
%X Excellent survey of this project.
Reproduced in "Parallel Computing: Theory and Comparisons,"
by G. Jack Lipovski and Miroslaw Malek,
Wiley-Interscience, New York, 1987, pp. 295-311, appendix E.
%X * Brief survey of the Cosmic Cube and its hardware
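A small sketch (not from the article) of the hypercube addressing the
Cosmic Cube used: with d = 6 (64 nodes), node i is wired directly to the
d nodes whose binary addresses differ from i in exactly one bit, and a
message can be routed by correcting one differing address bit per hop.

    /* Sketch, not from Seitz's article: neighbors of a node in a
     * binary d-cube such as the 64-node Cosmic Cube (d = 6). */
    #include <stdio.h>

    #define DIM 6   /* 2^6 = 64 nodes */

    int main(void)
    {
        unsigned node = 13;   /* example node address 001101 */
        printf("neighbors of node %u:", node);
        for (int bit = 0; bit < DIM; bit++)
            printf(" %u", node ^ (1u << bit));   /* flip one address bit */
        printf("\n");
        return 0;
    }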