
Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 5/7/97] Intro/TOC/Justification comp.parallel (2/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 2 Mar 1998 13:03:13 GMT

Message-ID: <6deamh$dnd$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 7 May 1997

2 Introduction and Table of Contents and justification

4 Comp.parallel news group history

6 parlib

8 comp.parallel group dynamics

10 Related news groups, archives and references

12

14

16

18 Supercomputing and Crayisms

20 IBM and Amdahl

22 Grand challenges and HPCC

24 Suggested (required) readings

26 Dead computer architecture society

28 Dedications

Comp.sys.super Official (as it gets) Motto:

If you fit in L1 cache, you ain't supercomputing :-)

--Krste Asanovic <krste@ICSI.Berkeley.EDU>

Dave Bader helped out a lot.

While the panels of this FAQ are considered in the public domain

(this is because it's from a *.gov site),

its use in other media (except references) should be checked (some

portions should not be reproduced w/o further permission).

This document can be freely redistributed subject to conditions.

Contributions to this FAQ are by default considered anonymous unless

otherwise requested or noted. One notable parallelism biblio author

started signing his annotations and quickly discovered the problems

inherent with IDing oneself to critical (but true) commentary.

I am willing to take the flak and flames and act as intermediary.

Non-anonymous contributions can take initials, full name, or

with enough cash, NAME_IN_LIGHTS.

"Trust me." --Prof. Indiana Jones

Justification

=============

This is an experimental FAQ (it's like a parallel computer: some parts

are going to work and others won't). See what else you can figure out
that I'm testing.

It's designed this way like a lighthouse beacon. Some people will like this
and others won't (Burton Smith is on my side, so I'm not worried whether
I'm doing the right thing or not). 8^) The FAQ is a regular signal
autoposted bidaily. It should grow over time to daily posting, but
each a different part with a common index/TOC.

Over the course of a month, you should receive 14 or 28 or some number

of pieces. If you don't, you have some idea how unreliable your news feed

is from the San Francisco Bay Area (Santa Clara/Silicon Valley).

I also use these Subject lines like index tabs for searching.

Parts make version control easier. But if you lack a part on your system,

you might be a little annoyed. Also smaller parts make transmission more

likely (common defaults being 100 KB max articles, and even smaller than

30 KB). No one ever said that net.News was a reliable service.

You are encouraged to Killfile this post (or skip it as you need).

The Subject line is designed deliberately:

[l/m 1/26/96] AUTOMATED TEST comp.parallel (2/28) FAQ

This is the easiest way to Killfile this monthly post. Most news readers
Killfile only the first 24 characters; in this case, that's a Last-modified
date and some unique text. You only have to issue a Kill command once,
and you only get notified by your reader when changes take place.

Clever, huh? Some people are sick of the format. Complain to

news interface writers.

If you hate FAQs on general principle, you will have to edit your killfile
(maybe you hate me personally [tough, you have to figure out that edit
for yourself]); if you hate the FAQ concept, then edit the
Killfile with the common trailing string: "FAQ$".

You know regular expressions (the algebra of strings), right?

The $-sign? That's it.
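The trailing-anchor trick works in any regular expression engine; here is a minimal sketch of it in Python (the module and variable names are mine, purely for illustration):

```python
import re

# A killfile pattern anchored at end-of-subject: it matches any Subject
# line whose last characters are "FAQ", exactly as described above.
killfile = re.compile(r"FAQ$")

subjects = [
    "[l/m 1/26/96] AUTOMATED TEST comp.parallel (2/28) FAQ",
    "Re: hypercube routing question",
]

for s in subjects:
    status = "killed" if killfile.search(s) else "kept"
    print(f"{status}: {s}")
```

The `$` anchors the match to the end of the string, so only subjects *ending* in "FAQ" are caught; a subject that merely mentions FAQs mid-line survives.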

Don't complain to me if you have one of those poor news systems which

only allow 24 characters total in the Subject field. Complain to your

Service Provider. Get a full function news reader.

This is an FAQ like all FAQs: if you are part of the community,
you are willing to make or eat moose turd pie. If you sit back as a
mere lurker, you aren't going to get very much out of it.
It's a participatory technology. If you want anonymity (due to where you
work, what you do, etc.), it might be possible to work something out.

Quote:

"The best way to get information on Usenet is not to ask

a question, but to post the wrong information."

It's an attempt at community memory which dates to the early 1980s.

The Ironies of Parallel Computing

=================================

On run time: it's interesting that, for all their inaccuracies,
weather simulations attempt to run faster than the real weather while
nuclear bomb simulations take longer than the real phenomena.

It always bothers me that, according to the laws as we understand

them today, it takes a computing machine an infinite amount of logical

operations to figure out what goes on in no matter how tiny a region of

space and no matter how tiny a region of time. How can all that be going

on in that tiny space? Why should it take an infinite amount of logic

to figure out what one tiny piece of time/space is going to do?

--R. P. Feynman, The Character of Physical Law

On to the content. See you in two days.

A certain comedian named Utah Phillips has a story about "Moose Turd Pie."

This story is occasionally telecast, and UP retired from the comedy circuit.

In a younger life, Utah worked on a railroad. He was what was called
a "gandy dancer," a man who used a lever to raise a rail so that people
could work on railroad ties and the road bed. Well, one of the
unspoken jobs was that of cook. It was unspoken because it was not in
the job descriptions of gandy dancers, and someone had to cook or people
would starve.

The result is that the person who complained the most got to cook the

meals. So Utah was made cook. He hated it.

One day, UP was crossing a meadow when he found a giant moose turd (feces).

UP resolved that he was tired of being the cook.

So he rolled the turd back to the caboose (he was much more descriptive

of this), where he made a beautiful pie shell, plopped the turd inside, and

covered it with a beautiful crust.

Over dinner, the other workers ate their food. One of the meanest looking,

most powerful guys was there. They were expecting a good dinner after a

hard day of work. UP gave everyone slices of the pie without telling them.

He was hoping to jog someone into taking the job from him. The big mean guy

took his fork and placed a big slice into his mouth and started chewing.

The big mean guy burst out and shouted:

"My God, that's moose turd pie!"

Long pause.

"But it's good."

Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)

Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel


Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 10/22/97] group history/glossary comp.parallel (4/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 4 Mar 1998 13:03:14 GMT

Message-ID: <6djjei$1kt$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 22 Oct 1997

4 Comp.parallel news group history, glossary, etc.

6 parlib

8 comp.parallel group dynamics

10 Related news groups, archives and references

12

14

16

18 Supercomputing and Crayisms

20 IBM and Amdahl

22 Grand challenges and HPCC

24 Suggested (required) readings

26 Dead computer architecture society

28 Dedications

2 Introduction and Table of Contents and justification

News group history

==================

Comp.parallel began as a mailing list, created specifically for
Floating Point Systems T-series hypercubes in the late 1980s by
"Steve" Stevenson at Clemson University. Later, the list was
gatewayed to a news group (originally comp.hypercube). About six months in,
someone suggested that the news group cover all parallel computing.
That's when it was changed (by democratic vote, to be sure) to the
moderated Usenet group comp.parallel.

Comp.parallel distinguished itself as one of the better Usenet groups

with a high "signal to noise" posting ratio.

Prior to comp.parallel, parallel and supercomputing were discussed in

the unmoderated Usenet group comp.arch (poor signal to noise ratio).

[aka high performance computing]

I forget (personally) the discussion which went along with the creation of

comp.sys.super and comp.unix.cray. It is enough to say that "it happened."

Comp.sys.super started as part of the "Great Usenet Reorganization"

(circa 1986/7).

C.s.s. was just seen as part of the existing sliding scale of

computer performance (from micros to supers).

Minicomputers (16-bit LSI machines) started disappearing about this time.

Where's the charter?

====================

It's going to be substituted here.

What's okay to post here?

=========================

Most anything relating to parallel computing (comp.parallel) or
supercomputing (comp.sys.super, but unmoderated). Additionally, one
typically posts opinions about policy as it relates to running the news group
(i.e., news group maintenance). Largely, it is up to the moderator in
comp.parallel to decide what ultimately propagates (in addition to the
usual propagation problems [What? You expect news to be propagated reliably?
I have a bridge to sell and some land in Florida which is occasionally
above water.]).

We are not here to hold your hand. Read and understand the netiquette posts

in groups such as news.announce.newusers (or de.newusers or similar groups).

Netiquette != etiquette.

Netiquette ~= etiquette.

Netiquette not = etiquette.

NETIQUETTE .NE. ETIQUETTE.

Avoid second and third degree flames: no pyramid posts or sympathy card calls.

Sure, someone might be dying, but that's more appropriate in other groups.

We have posted obits and funeral notices (e.g., Sid Fernbach, Dan Slotnick).

No spam. We will stop spam, especially cross-posted spam.

Current (1996) SPAM count to (comp.parallel): growing.

Current (1996) SPAM count to (comp.sys.super): more than c.p.

The spam count is the number of attempts to spam the group which get

blocked by moderation.

One more note:

Good jokes are always appreciated. Is it Monday?

GOOD JOKES.

Old joke (net.arch: 1984) with many variants:

In the 21st Century, we will have greater than Cray-1 power

with massive memories and huge disks, easily carryable under the arm

and costing less than $3000, and the first thing the user asks:

"Is it PC compatible?"

Guidance on advertising:

------------------------

Keep it short and small. This means: post-docs, employment, products, etc.

Don't post them too frequently.

What's okay to cross-post here?

-------------------------------

Your moderators are in communication with other moderators.

Currently, if you cross-post to two or more moderated news groups,

a single moderator can approve or cancel such an article.

Mutual agreements for automatic cross-post approval have been
negotiated with:

comp.compilers

comp.os.research

comp.research.japan

news.announce.conferences (moderator must email announcement to n.a.c.

moderator)

Pending

comp.doc.techreports

You are free to separately dual post (this isn't a cross-post) to

those moderated news groups.

Group Specific Glossary

=======================

Q: What does PRAM stand for? A: Parallel Random Access Machine.

Confused by acronyms?

---------------------

http://www.ucc.ie/info/net/acronyms/acro.html

The following are noted but not endorsed (other name collisions possible):

Frequent acronyms:

ICPP: International Conference on Parallel Processing

ICDCS IDCS DCS: International Conference on Distributed Computer Systems

ISCA: International Symposium on Computer Architecture

MIN: Multistage Interconnection Network

ACM && IEEE/CS: two professional computer societies
(ACM: the one with the SIGs; IEEE: the one with the technical committees)

CCC: Cray Computer Corporation (defunct)

CRI: Cray Research Inc. (SGI div.)

CDC: Centers for Disease Control and Prevention;
Control Data Corporation (defunct)

CDS: Control Data Services

DMM:

DMP:

DMMP DMC: Distributed Memory Multi-Processor/Computer

DMMC: Distributed Memory Multiprocessor Conference (aka Hypercube Conference)

ERA: Engineering Research Associates

ETA: nothing, or Engineering Technology Associates (depending on who you talk to)

ASC: Texas Instruments Advanced Scientific Computer (real old)

ASCI: Accelerated Strategic Computing Initiative

ASPLOS: Architectural Support for Programming Languages and Operating Systems

IPPS: International Parallel Processing Symposium

JPDC: Journal of Parallel and Distributed Computing

MIDAS: Don't use. Too many MIDASes in the world.

MIP(S): Meaningless Indicators of Performance; also MFLOPS, GFLOPS, TFLOPS,
PFLOPS (also substitute IPS and LIPS [logical inferences] for FLOPS)

NDA: Non-disclosure Agreement

POPL: Principles of Programming Languages

POPP PPOPP PPoPP: Principles and Practice of Parallel Programming

HPF: High Performance Fortran (a parallel Fortran dialect)

MPI: Message Passing Interface (also see PVM)

PVM: Parallel Virtual Machine (clusters/networks of workstations);
also see MPI.
Parallel "shared" Virtual Memory [not the same as the other PVM]

SC'xx: Supercomputing 'xx (a conference, not to be confused with the journal)

SGI: Silicon Graphics, Inc.

SUN: Stanford University Network

SOSP: Symposium on Operating Systems Principles

SPDC: Symposium on Principles of Distributed Computing

SPAA: Symposium on Parallel Algorithms and Architectures

TOC/ToC: IEEE Transactions on Computers;
Table of Contents

TOCS: ACM Transactions on Computer Systems

TPDS/PDS: Transactions on Parallel and Distributed Systems;
Partitioned Data Set

TSE: Transactions on Software Engineering

Pascal && Unix: They aren't acronyms.

You can suggest others.....

We have dozens of others; we are not encouraging their use.

This is a list of last resort.

While people use these macros in processors like BibTeX, many interdisciplinary
applications people reading these groups are clueless. USE THE COMPLETE NAME.


Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 7/1/96] parlib/mail daemons/servers archives comp.par (6/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 6 Mar 1998 13:03:10 GMT

Message-ID: <6dos6e$e3s$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 1 Jul 1996

6 parlib

8 comp.parallel group dynamics

10 Related news groups, archives and references

12

14

16

18 Supercomputing and Crayisms

20 IBM and Amdahl

22 Grand challenges and HPCC

24 Suggested (required) readings

26 Dead computer architecture society

28 Dedications

2 Introduction and Table of Contents and justification

4 Comp.parallel news group history

Where is the group archived?

============================

Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel

See also parlib.

parlib@hubcap.clemson.edu

It's like netlib. To start, send the mail daemon

send index

in the body or Subject field of the message.
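As a sketch of what such a request looks like, here is the message built (not sent) with Python's standard email library; the address is the one given above, and the Subject text is my own placeholder:

```python
from email.message import EmailMessage

# Build the parlib request described above: the command "send index"
# goes in the body (it could equally go in the Subject field).
msg = EmailMessage()
msg["To"] = "parlib@hubcap.clemson.edu"
msg["Subject"] = "parlib request"
msg.set_content("send index\n")

# Show the raw message; hand it to your MTA however you normally would,
# e.g. smtplib.SMTP("localhost").send_message(msg).
print(msg.as_string())
```

The daemon parses the command text, so the body must contain the bare command line and nothing that would confuse it (signatures, quoted text).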

From: Dave Beckett <D.J.Beckett@ukc.ac.uk>

(Dec 1993 till present) available under

http://www.hensa.ac.uk/parallel/internet/usenet/

I also have older articles off-line from Steve Stevenson, from 1990 on,
but consider them too old to be relevant.

comp.parallel specifically is at

http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel/articles/

At this time (June 1995), the index (minus the help) is:

bibliographies - Bibliographies available through parlib or ftp.

butterfly - Example programs for the Butterfly.

connection-machine - Connection Machine examples.

cray - Example programs for various Cray products.

faq - Frequently asked questions and (with any luck) answers.

intel860 - Sample problems for the Intel i860 systems.

intel860/c - Example C program for the iPSC/860.

intel860/fortran - Example Fortran program for the iPSC/860.

miscellaneous - Algorithms and/or programs which don't fit any
other category.

mobil - Donation of codes illustrating the same problem
solved on various architectures.

newsgroup - Information regarding the comp.parallel newsgroup.

other-servers - Other locations specializing in serving special groups.

parallaxis - Example programs written in Parallaxis.

p4 - Example programs written in p4.

readings - Suggested or required readings in parallel /
distributed processing.

schools - Schools and curricula which support parallel
processing work.

sisal - Example problems written in Sisal.

salishan - Problems and programs from John Feo's book.

send index from faq yields:

amdahlslaw - some references to Amdahl's Law

# oh yeah, read about it. Don't just find it. Get the NCC session

# section on it (including Slotnick's rebuttal/commentary).

debugging - some comments on debugging

emailaddress - how to find email addresses

environments - the turcotte report on environments

express - outline of system

forge - outline of system

grandchallenges - a list of the original grand challenges

gccommentary - a discussion of the grand challenges and how the GCs evolve.

linda - Questions about linda may be answered here.

simulators(dir) - some multiprocessor simulation packages

parascope - Rice University's tools for scientific programming.

parasoft - latest version of news letter---info on where to get.

pcnets - networks of pcs: software.

ppc - Information about the Parallel Processing Connection

p4 - information on p4 and how to get the code.

pvm - information on PVM and how to get the code
(this entry is obsolete; see instead:

http://www.epm.ornl.gov/pvm/pvm_home.html

http://www.netlib.org/pvm3/index.html

http://www.netlib.org/pvm3/faq_html/faq.html

http://www.netlib.org/pvm3/book/node1.html)

qvctapes - information about QVC tapes availability.

See also:

http://www.hensa.ac.uk/parallel/
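The amdahlslaw entry in the index above points at references on Amdahl's Law; for readers who just want the formula, here is a minimal sketch (the standard textbook form; the function name is mine):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup when a fraction p of the work is spread over n processors.

    The serial fraction (1 - p) bounds the speedup: as n grows without
    limit, speedup approaches 1 / (1 - p).
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even 90%-parallel code tops out near 10x, no matter how many processors:
print(round(amdahl_speedup(0.90, 10), 2))      # → 5.26
print(round(amdahl_speedup(0.90, 10_000), 2))  # → 9.99, approaching the 10x bound
```

This is exactly why the FAQ keeps nudging readers toward Slotnick's rebuttal and the original NCC session: the formula itself is trivial; the argument around it is not.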

The net does not assure the value or quality of the information posted here.

caveat receptor

The quality is a function of how YOU (and others) participate. If you sit

back passively, you will be disappointed. The group is basically academic

in nature. Expect academic debate (or the more juvenile "flame war")
on some topics. You get what you pay for here. Don't expect, for free,
technical advice you would otherwise have paid thousands of dollars for
(an analysis). It doesn't happen.

<URL:http://www.hensa.ac.uk/parallel/faqs/>

I have a huge amount (450M) of parallel computing files and lots of

FAQs, and stuff culled from news articles.

<URL:http://www.hensa.ac.uk/parallel/faqs/>

Experimentally:

<URL:http://asknpac.npac.syr.edu>

or

<URL:http://128.230.144.19>.

Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)

Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel


Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 12/1/97] news group dynamics comp.parallel (8/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 8 Mar 1998 13:03:22 GMT

Message-ID: <6du4uq$ql0$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 1 Dec 1997

8 comp.parallel group dynamics

10 Related news groups, archives and references

12

14

16

18 Supercomputing and Crayisms

20 IBM and Amdahl

22 Grand challenges and HPCC

24 Suggested (required) readings

26 Dead computer architecture society

28 Dedications

2 Introduction and Table of Contents and justification

4 Comp.parallel news group history

6 parlib

Why is this news group so quiet?

================================

This is an interesting question. It has several answers. Shortest answer:

Because there are spies reading it. Real ones. Industrial and governmental.

No kidding. Then, there is the Press....

People tend to be very quiet in this group, because there are some rather

sensitive economic stakes (and other kinds as well). Many people asking

tend to be trolling for information. Real information exchange tends to

be better done using email.

Confidential: The level of classification designated by Executive Order 12356

for "information, the unauthorized disclosure of which could be expected to

cause damage to the national security."

Secret: The level of classification designated by Executive Order 12356

for "information, the unauthorized disclosure of which could be expected to

cause serious damage to the national security" [such as in the disruption
of international relations, or the impairment of the effectiveness of
a program or policy of vital importance].

Top Secret: The level of classification designated by Executive Order 12356

for "information, the unauthorized disclosure of which could be expected to

cause extremely grave damage to the national security" [such as in

time of war or the breaking of diplomatic relations.]

HOMEWORK: (academic students, undergrad, grads, post-grads)

-----------------------------------------------------------

If you are a student, in particular US/Canadian, or have an instructor,

homework places a quandary on the knowledgeable people in this group.

Students are supposed to do exercises, not have the readers of this group

do exercises for them. Where a homework query ends and a serious
research query begins is sometimes hard to define.

Point the following out to your instructor (this is one of the reasons

why you should read and be aware of news.announce.newusers (or de.newusers

or similar groups)).

The best canned answer from another FAQ is:

>1A. What about "I've got a school assignment...."

>

>Recently, I've been made aware of a USENET policy about posting news
>articles requesting info for a school assignment. Michael Chui at Indiana

>University sent me the following info, which I have included

>verbatim and will try to stick to (since I agree with it).

>

>--------begin included text--------

> From Michael Chui mchui@cs.indiana.edu

>

>Excerpt from the Usenet Primer published in the news.* groups

>

> Please do not use Usenet as a resource for homework assignments

>

> Usenet is not a resource for homework or class assignments. A common

> new user reaction to learning of all these people out there holding

> discussions is to view them as a great resource for gathering

> information for reports and papers. Trouble is, after seeing a few

> hundred such requests, most people get tired of them, and won't reply

> anyway. Certainly not in the expected or hoped-for numbers. Posting

> student questionnaires automatically brands you a "newbie" and does not

> usually garner much more than a tiny number of replies. Further,

> some of those replies are likely to be incorrect.

>

> Instead, read the group of interest for a while, and find out what the

> main "threads" are - what are people discussing? Are there any themes

> you can discover? Are there different schools of thought?

>

> Only post something after you've followed the group for a few weeks,

> after you have read the Frequently Asked Questions posting if the group

> has one, and if you still have a question or opinion that others will

> probably find interesting. If you have something interesting to

> contribute, you'll find that you gain almost instant acceptance, and

> your posting will generate a large number of follow-up postings. Use

> these in your research; it is a far more efficient (and accepted) way

> to learn about the group than to follow that first instinct and post a

> simple questionnaire.

>

> Actually, I'm not completely opposed to using the Net as a

>resource for academic research. Being still in academia, I *am*

>irritated by people who want the Net to do their research for them

>(and not just because the results are often inaccurate).

> On the other hand, I'd accept queries like, "I'm researching

>airship mine technology, and in General Napoleon SchwartzRommel's book

>_Boom, Der It Is!_, he makes reference to the GedankenSweeper. I've

>searched my University of Podunk library, but can't find any references

>to the GedankenSweeper. Could someone give some pointers to references

>about the propulsion system in the GedankenSweeper?" I'd like the

>student to show that they've done some work themselves (like go to

>a library) before they send a message to thousands of people.

> It all basically comes down to the oft-repeated Net-reminder

>that "the person on the other side of the message is human." Think of

>the Net as being an expert on, in sci.military's case, military

>technology. Would you walk into a military technology expert's

>office and ask him/her, "Gee, I have this homework assignment to do

>on X. Can you tell me everything you know about this topic?" Worse

>yet, would you do this to thousands of people?

Suggestions: you should point this out to your instructor.

Your instructor should either set guidelines for you or post

to the news group and lay out guidelines which a few of the

members of the news group will attempt to shoot down (or not).

It is general moderator policy to attempt to post anything you send us

RELEVANT to parallel computing (and to the maintenance of news group policy).

Why isn't there more traffic?

=============================

The experienced posters are tired. They understand what's going on.

Part of the problem is you, if you are a new reader.

Your timing is a little unfortunate; you should show that you have done
at least a little homework, etc. They don't want to get flamed.

I speak with Second Degree experience myself.

It used to be possible to get posts/info from Alan Jay Smith about

cache memories from the net.arch group. No more. Those days are past.

Part of the problem is failure to understand and follow netiquette.


Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 2/27/98] network resources -- comp.parallel (10/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 10 Mar 1998 13:03:19 GMT

Message-ID: <6e3dmn$836$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 20 Jan 1998

10 Related news groups, archives, test codes, and other references

12 User/developer communities

14 References, biblios

16

18 Supercomputing and Crayisms

20 IBM and Amdahl

22 Grand challenges and HPCC

24 Suggested (required) readings

26 Dead computer architecture society

28 Dedications

2 Introduction and Table of Contents and justification

4 Comp.parallel news group history

6 parlib

8 comp.parallel group dynamics

Related News Groups

-------------------

Child groups:

comp.parallel.pvm (unmoderated)

http://www.epm.ornl.gov/pvm/pvm_home.html

http://www.netlib.org/pvm3/index.html

http://www.netlib.org/pvm3/faq_html/faq.html

http://www.netlib.org/pvm3/book/node1.html

ftp://netlib2.cs.utk.edu/pvm3

http://www.nas.nasa.gov/NAS/Tools/Outside/

comp.parallel.mpi (unmoderated)

http://www.mcs.anl.gov/mpi

comp.arch (unmoderated) # our parent news group.

comp.arch.arithmetic (unmoderated) # step kids

comp.arch.storage: many news groups discuss parallelism/HPC

comp.os.research (moderated, D. Long, UCSC)

comp.sys.convex (unmoderated)

# I wonder if this will become an h-p group.

comp.sys.alliant (unmoderated)

comp.sys.isis: Isis is a commercial message passing package for

C and Fortran (at least); features fault-tolerance.

# defunct:

# comp.sys.large (unmoderated) # The term "Big iron" is used.

# # more mainframes and distributed networks

comp.sys.super (unmoderated)

comp.sys.transputer (unmoderated) (consider also OCCAM here)

comp.unix.cray (unmoderated)

comp.research.japan (moderated, R.S.,UA)/soc.culture.japan (unmoderated)

sci.math.*

In many different forms, you will even find it in places
like bionet.computational, but it is not the intent of this list to be
anywhere near complete. Locate application areas of interest.

comp.benchmarks

aus.comp.parallel

fj.comp.parallel (can require 16-bit character support)

alt.folklore.computers: computing history, fat chewing

others

Note: all these news groups (and others) are given as options.
Nothing will stop you from posting to this news group on most any topic.

Where are the parallel applications?

------------------------------------

Where are the parallel codes?

-----------------------------

Where can I find parallel benchmarks?

=====================================

High performance computing has important historical roots with some
"sensitivity":

1) Remember the first computers were used to calculate the trajectory of

artillery shells, crack enemy codes, and figure out how an atomic bomb

would work. You are fooling yourself if you think those applications

have disappeared.

2) The newer users, the simulators and analysts, tend to work for industrial

and economic concerns which are highly competitive with one another.

You are fooling yourself if you think someone is going to just place their

industrial strength code here. Or give it to you.

So where might I find academic benchmarks?

parlib@hubcap.clemson.edu

send index

netlib@ornl.gov

send index from benchmark

nistlib@cmr.ncsl.nist.gov

send index

See also:

Why is this news group so quiet?

Other news groups:

sci.military.moderated

We also tend to have many "chicken-and-egg" problems.

"We need a big computer."

"We can design one for you. Can you give us a sample code?"

"No."

...

New benchmarks might be best requested in various application fields.

sci.aeronautics (moderated)

sci.geo.petroleum

sci.electronics

sci.physics.*

sci.bio.*

etc.

Be extremely mindful of the sensitive nature of collecting benchmarks.

Obit quips:

MIP: Meaningless Indicators of Performance

Parallel MFLOPS: The "Guaranteed not to exceed speed."

Where can I find machine time/access?

=====================================

Ask the owners of said machines.

What's a parallel computer?

===========================

A bunch of expensive components.

Parallelism is not obvious. If you think it is, I can sell you a bridge.

The terminology is abysmal. Talk to me about Miya's exercise.
The problem is mostly (but not all) in the semantics.

Is parallel computing easier or harder than "normal, serial" programming?

=========================================================================

Ha. Take your pick. Jones says no harder. Grit and many others say yes

harder. It's subjective. Jones equated programming to also mean

"systems programming."

In 1994, Don Knuth, in a "Fireside Chat" session at a conference, was asked
(not by me):

"Will you write an 'Art of Parallel Programming'?"

He replied:

"No."

Knuth did not.

One group of comp.parallel people holds that "parallel algorithm" is an oxymoron:
that an algorithm is inherently serial by definition.

How can you scope out a supercomputing/parallel processing firm?

================================================================

Lack of software.

What's your ratio of hardware to software people?

Lack of technical rather than marketing documentation.

When will you have architecture and programming manuals?

Excessive claims about automatic parallelization.

What languages are you targeting?

See Also: What's holding back parallel computer development?

==================================================

"I do not know what the language of the year 2000 will look like

but it will be called FORTRAN."

--Attributed to many people including

Dan McCracken, Seymour Cray, John Backus...

All the Perlis Epigrams on this language:

42. You can measure a programmer's perspective by noting his

attitude on the continuing vitality of FORTRAN.

--Alan Perlis (Epigrams)

70. Over the centuries the Indians developed sign language

for communicating phenomena of interest. Programmers from

different tribes (FORTRAN, LISP, ALGOL, SNOBOL, etc.) could

use one that doesn't require them to carry a blackboard on

their ponies.

--Alan Perlis (Epigrams)

85. Though the Chinese should adore APL, it's FORTRAN they

put their money on.

--Alan Perlis (Epigrams)

See also #68 and #9.

12

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 4/8/97] Who runs the ||ism-comunity? -- comp.parallel (12/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 12 Mar 1998 13:03:08 GMT

Message-ID: <6e8med$leq$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 8 Apr 1997

12  Who runs the ||ism-community?

14  References

16

18  Supercomputing and Crayisms

20  IBM and Amdahl

22  Grand challenges and HPCC

24  Suggested (required) readings

26  Dead computer architecture society

28  Dedications

2  Introduction and Table of Contents and justification

4  Comp.parallel news group history

6  parlib

8  comp.parallel group dynamics

10  Related news groups, archives, test codes, and other references

Why this panel?

---------------

One man's research is another man's application.

A significant undercurrent belies high performance computing:

The lines of communication between computer system builders and applications

are something like the "war" between men and women.

Alan Turing never met John Gray [Men are from Mars, Women are from Venus].

Users can't understand what's taking so long and why programming is so hard.

Programmers and architects, perennial optimists [except me, the Resident Cynic]

always promise more and tend to deliver late.

This panel needs a lot of work, because I have exposure to a limited set of

communities: three- and four-letter agencies, plus people and friends

in the physics, chemistry, biology, and earth science communities in

academia and industry. Etc., etc. Add what you want.

How parallel computing is like an elephant (with blind men)

-----------------------------------------------------------

Who runs the computer industry?

-------------------------------

A little road map

-----------------

Programmers are from Mars.

Users are from Venus.

--E. Miya, March 1996, DL'96

God didn't have an installed base.

--Datamation ??

This section attempts to cover topics relating to various sub-cultures

in the high-performance computing market. If you don't understand something,

you aren't alone. If you think you understand something, you don't.

These topics are long standing (from net.arch in the pre-comp.parallel

and pre-comp.sys.super days).

If parallel computing is a business, are the customers always right?

Scale

-----

How would you like programs to run twice as fast? How about 10% faster?

Not as impressive? In this group, factors of 2-4 aren't impressive.

From my knowledge of agencies like the Dept. of Energy (formerly ERDA and AEC),

factors of 8-16 (around 10) interest people. Keyword: EXPECTATION:

At 3 MIPS, the CDC 6600 at its appearance was 50x faster than its predecessor,

the ERA/UNIVAC 1604. WE DO NOT SEE THIS DEGREE OF GAIN CONSISTENTLY.

With that reference, we proceed.

Clearly, smaller speed ups (percentage improvements or 2-4x) are useful

to some users, but this illustrates the nature of Super-scales.
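The expectation gap can be sketched with Amdahl's law (covered on the IBM and Amdahl panel). This is a generic illustration, not a measurement of any machine; the 5% serial fraction is a made-up number:

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's law: speedup is capped by the fraction of work
    that cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# A hypothetical code that is 95% parallelizable:
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.05, p), 1))
# Even 1024 processors yield less than a 20x speedup here --
# nowhere near the consistent 50x-per-generation expectation.
```

Eight processors give about 5.9x under these assumptions, which is the shape of the "pay for 8, get 4x" complaint later in this panel.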

Traditional computer science teaches about the time-space trade-off

in computation. SUPERCOMPUTING DOESN'T ALLOW TRADEOFFS.

We must distinguish between

ABSOLUTE performance (typically measured by wall-clock)

RELATIVE performance (normalized or scaled percentage (%))

If you have a problem and you can't trade one for the other:

then it MIGHT be a supercomputing problem.

Run time tends to be either too small or too slow: then it might not be super

anymore. The definition is a moving wave.

Problem scale: space: problems O(n^2) and O(n^3) are common. O(n^4)

from things like homogeneous coordinate systems are increasingly common.

Remember: parallel computing is an O(n) solution to the above O-notation.

Problem scale: time: O(n^3) and greater thru NP-complexity

One popular FAQ line (sci.math) is proving P == NP.

That won't be covered here. (E.g., the Cray-XXX [choose a model number]

can execute an infinite loop in 10 seconds [some finite figure].

Yes, people have posted that joke here.)

Processors are typically scaled (added) at O(n) or at best O(n^2).
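The mismatch between O(n) processor scaling and O(n^3) problem scaling can be put in back-of-the-envelope form. A minimal sketch under idealized assumptions (perfect parallel efficiency, arbitrary time units):

```python
def achievable_n(processors, work_exponent=3):
    """Largest problem dimension n solvable in fixed time when the
    work grows as n**work_exponent and processors divide it ideally."""
    return processors ** (1.0 / work_exponent)

# Doubling the machine grows the solvable 3-D problem by only ~26%:
print(achievable_n(512))    # 8.0
print(achievable_n(1024))   # ~10.08, a factor of 2**(1/3)
```

This is the arithmetic behind viewing improvement claims skeptically: linear hardware growth buys sub-linear problem growth.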

Any improvements must be viewed realistically, bordering on skeptical.

This is why claims of superlinear speed up (properly super-unitary) should

be viewed with great skepticism. Clearly, people working in this area

need the proverbial pat on the back, but giddy claims only serve to hurt

the field in the long run.

Nothing like showing 2-D computational results,

when end-user customers work on 3-D problems.

Additionally, people tend to assume synchronous systems.

Asynchronous systems are even more "fun."

Let's bring cost into the discussion:

Since the 1970s, it has been realized that complete processor or memory

connectivity (the typical example given was a cross-bar) scales by O(n^2)

interconnections.

Over time, various multistage interconnection networks (MINs)

have scaled this down to variations around O(n ln n)

[interpreted: this is still more than O(n)].

This contrasts with the perceived dropping cost of electronics

(semiconductor substrate). See the Wulf quote about CPUs on an earlier panel.
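The O(n^2) versus O(n log n) cost difference is easy to tabulate. A sketch assuming a full crossbar versus a butterfly-style MIN built from 2x2 switching elements (power-of-two n; real networks vary):

```python
import math

def crossbar_points(n):
    """Crosspoints in a full n x n crossbar: O(n^2)."""
    return n * n

def butterfly_switches(n):
    """2x2 switching elements in a butterfly MIN:
    (n/2) * log2(n) -- O(n log n), still more than O(n)."""
    return (n // 2) * int(math.log2(n))

for n in (16, 256, 1024):
    print(n, crossbar_points(n), butterfly_switches(n))
# At n = 1024: 1,048,576 crosspoints vs. 5,120 switches.
```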

This is not a problem for small scale (4 to 8-16 processors,

aren't you glad you got those numbers?) on a bus, but investigators

(users) want more power than that. It is VERY hard to justify these

non-linear costs to people like Congress:

"You mean I pay for 8 processors and I get 4x the performance?

You have some serious explaining to do, son."

This brings up the superlinear speed up topic (more properly called

"superunitary speed up"). That is another panel.

What are some of the problems?

----------------------------------------

The technical problems interact with some of the economic/political problems.

First comes consistency. You take it for granted. (Determinism)

Say to yourself,

asymmetric

boundary conditions

exception handling

consistency

starvation

deadlock

state of the art limits of semiconductors

If you are not in the computing industry, you might be confused by

time scales. Silicon Valley does not operate on the same wavelengths

or time scales as other parts of the world.

It is estimated that the US Government takes an average of 430 days (1990)

to purchase a supercomputer. The typical memory chip has a useful commercial

life time of two years before it is succeeded by better technology.

Fields like astronomy and glaciology may take a year or two to referee

some papers.

Refereed papers in computing are frequently the exception (often obsolete by publication)

and seminars and conferences tend to hold more weight

(including personal followup). The speed at which some ideas are discarded

can be particularly fast.

Many parts of the computer community tend to assume their users' environments

behave very much like their own. This is usually not the case.

This is why I value my contacts outside the computer industry.

A funny relationship exists.

Traditional science has been characterized by theory and experiment.

In the late 1980s, several key Nobel Laureates (remember that there are

no Nobel Prizes for Math or Computing) starting with people

like Ken Wilson and continuing the tradition with Larry Smarr

have argued for a third part: computational science.

The silent majority in many sciences sometimes rebuts, saying:

Any field which has to use 'science' in its name isn't one.

[R.P. Feynman Lectures on Computation]

14

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 3/10/98] finding ||-ism references -- comp.parallel (14/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 14 Mar 1998 14:58:07 GMT

Message-ID: <6ee5tv$ed0$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 10 Mar 1998

14  References

16

18  Supercomputing and Crayisms

20  IBM and Amdahl

22  Grand challenges and HPCC

24  Suggested (required) readings

26  Dead computer architecture society

28  Dedications

2  Introduction and Table of Contents and justification

4  Comp.parallel news group history

6  parlib

8  comp.parallel group dynamics

10  Related news groups, archives, test codes, and other references

12  Who runs the ||ism-community?

Like Dickens: this is the best of times, and it's the worst of times.

First, you should always consider commercial services like DIALOG.

They cost money, and they attempt to maintain quality, but you should also

be very wary of these services. I have tested them, and they all lack.

But you have legal recourse.

Second: consider the Web. Quite a few references, mostly newer, exist.

Quite impressive if you know how TO PHRASE YOUR QUERY.

Impressive are NCSTRL, Alf's and other sources.

Most recently, the Association for Computing Machinery (ACM: www.acm.org)

has placed many of their proceedings on line with their digital library.

At this time, this service is free, and will likely remain free to

ACM members, BUT it is likely that non-members will have to pay for

use in the future.

Third: Specialization:

Parallel I/O (Dave Kotz) [does a particularly nice job]

------------

http://www.cs.dartmouth.edu/cs_archive/pario/

Parallel debugging (Cheri Pancake)

------------------

Fragments of these are slowly being incorporated into....

If not specialized (generalist):

Fourth: consider my biblio (NASA.TM-86000). It attempts to be comprehensive.

Its main advantage is the collected set of comments, errata, flames, etc.

(annotations). This is possibly a very nice biblio in many respects.

It helps to have real tools and not just a text editor.

DISADVANTAGES: needs lots of catch up work. Volunteers?

In particular I am seeking a copy of:

%A Ulrike Bernutat-Buchmann

%A Dietmar Rudolph

%A Karl-Heinz ScholBer

%T Parallel Computing I Eine Bibliographie

%I Rechenzentrum und Ruhr-Universitat Bochum

%D September 1983

%K book, text, paper,

%O ISSN: 0723-2187

%X An extremely large printed bibliography on the subject.

It is probably in a machine readable form.

It has over 5000 entries, many in European languages.

Should try to merge it with this list. It does not appear

to have annotations, does have a cross reference list, does have keywords

but they are not printed.

ADVANTAGES: Free. If a reference isn't inside my biblio (especially older),

then it might be questionable. Ask. I might not have it, or

I might not yet have integrated inside. The idea behind my biblio

is to be able to

1) locate useful information,

2) not merely cite sources,

3) reformat information as needed,

4) have useful assessments, errata, etc.

to steer clear of less useful information, but this can create conflicts.

Other biblios posted to the net on topics like load balancing, neural nets,

APL, etc. have been reformatted and incorporated as time permits.

The purpose of my biblio is to use it. It's way beyond promotion time.

My biblio is not meant as advertising, yet it can be construed that way to

a limited degree. Where it differs from commercial services is that

ANYONE can provide an intelligent comment,

I'll even take semi-intelligent comment (you can express your opinion).

ANYONE CAN SUGGEST A COMMENT, AND NO OTHER BIBLIO ALLOWS THAT.

Some of the best computer people in the world have commented inside

this biblio.

Formats:

refer

Slowly disappearing.

Advantages: can be used for reformatting as well as search.

bibTeX

Advantages: powerful, can be used with reformatting.

Disadvantages: some people give references w/o giving Macros.

This can really suck (a real pisser). Bulky.

Scribe

Slowly disappearing.

Advantages: can be used for reformatting.

Script

Slowly disappearing.

Advantages: can be used for reformatting.

Z39.50 and Dublin Core

Watch for these.

Disadvantages: must be reformatted.

InterBib

Too early to tell.

Bibliographic citation:

Largely irrelevant: the field either doesn't care or attaches minor significance.

The field is diverse: some areas sensitive, other areas not.

Potential land mines:

Authors by last name alphabetic order

Authors by order of importance to work

Authors by first name initials

Authors by full name

Dennis Allison (Stanford and HaL) informs me that Satya got the copyright back

and we have redistribution authority.

The parallel/distributed processing bibliography (in machine readable

form) is documented in ACM CAN:

%A E. N. Miya

%T Multiprocessor/Distributed Processing Bibliography

%J Computer Architecture News

%I ACM SIGARCH

%V 13

%N 1

%D March 1985

%P 27-29

%K Annotated bibliography, computer system architecture, multicomputers,

multiprocessor software, networks, operating systems, parallel processing,

parallel algorithms, programming languages, supercomputers,

vector processing, cellular automata, fault-tolerant computers,

some digital optical computing, some neural networks, simulated annealing,

concurrent, communications, interconnection,

%X Notice of this work. Itself. Quality: no comment.

Also short note published in NASA Tech Briefs vol. 12, no. 2, Feb. 1988,

pp. 62. Also referenced in Hennessy & Patterson pages 589-590.

About an earlier unmaintained version. TM-86000 and ARC-11568.

Maintaining for ten years with constant updates (trying to be complete

but not succeeding). Limited verification against bibliographic systems

(this is better than DIALOG). Storing comments from colleagues

(DIALOG can't do this.) Rehash sections on a Sequent as a test of parallel

search (this work exhibits unitary speed-up). 8^).

The attempt is to collect respected comments as well as references.

Yearly net posting results hopefully updated "grequired" and "grecommended"

search fields. Attempted to be comprehensive up to 1989.

$Revision:$ $Date:$

It began with a bibliography published in 1980 by

%A M. Satyanarayanan

%T Multiprocessing: an annotated bibliography

%J Computer

%V 13

%N 5

%D May 1980

%P 101-116

%X Excellent reference source, but dated.

Text reproduced with the permission of Prentice-Hall \(co 1980.

$Revision: 1.2 $ $Date: 84/07/05 16:58:56 $

My work is considerably larger (over 100 times now).

# Next three lines to be removed shortly:

16

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m xx/xx/xx] comp.par/comp.sys.super (16/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 16 Mar 1998 13:03:07 GMT

Message-ID: <6ej7ub$29j$1@cnn.nas.nasa.gov>

Archive-Name:

Last-modified:

This space intentionally left blank (temporarily).

18

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 2/27/98] What *IS* a super? comp.par/comp.sys.super (18/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 18 Mar 1998 13:03:14 GMT

Message-ID: <6eogmi$ok$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 29 Jan 1998

18  Supercomputing and Crayisms

20  IBM and Amdahl

22  Grand challenges and HPCC

24  Suggested (required) readings

26  Dead computer architecture society

28  Dedications

2  Introduction and Table of Contents and justification

4  Comp.parallel news group history

6  parlib

8  comp.parallel group dynamics

10  Related news groups, archives and references

12

14

16

Not heard in these parts:

This computer is good for {select one: 51%, 90%, 99%}

of your needs.

What constitutes a supercomputer?

---------------------------------

What makes a supercomputer?

===========================

The fastest, most powerful machine to solve a problem today.

Generally credited to Sid Fernbach, George Michael, and others.

What if I qualify that with "cost?" ["for the cheapest"]

--------------------------------------------------------

Then, it's not a supercomputer. Period.

It might be a minisupercomputer, though.

Don't let George know that I said that (he's much more hardline).

Other answers

-------------

0) A Japanese company. ;-)

1) "My definition is 'best':

a supercomputer is the one that runs your problem(s) the fastest."

2) "A supercomputer is a device for converting a CPU-bound problem

into an I/O bound problem." [Ken Batcher]

3) "A supercomputer is one that is only one generation behind what

you really need." Neil Lincoln's definition.

3a) "Hardware above and beyond, software behind and below"

3b) A machine to solve yesterday's problems at today's speeds.

4) Page _one_ of the Linpack-report...

What is Linpack? (LINPACK)

----------------

Linpack100x100 - All Fortran, dominated by daxpy unless advanced compiler

optimizations are available. Seldom quoted in marketing literature

because the performance is much lower than the following two.

However, Dongarra sorts his chart by machine performance on this benchmark.

Linpack1000x1000 - Typically Vendor Library routines which use BLAS3 or

LAPACK routines (N**2 data refs for N**3 operations)

Shows single processors with high floating point capacity in favorable

light, so often quoted in marketing literature.

Linpack NxN - problem size determined by Vendor, good for parallel machines

since with correct choice of problem size can maximize the computation

per communication step. Often quoted in marketing literature for

the larger parallel systems.
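The parenthetical "(N**2 data refs for N**3 operations)" is the crux: arithmetic intensity. A rough sketch, ignoring constant factors and cache behavior:

```python
def daxpy_flops_per_ref(n):
    """daxpy (y = a*x + y): 2n flops over ~3n memory references.
    Constant intensity -- the 100x100 benchmark stays memory bound."""
    return (2 * n) / (3 * n)

def lu_flops_per_element(n):
    """Dense n x n solve: ~(2/3) n^3 flops over n^2 matrix elements.
    Intensity grows with n -- why big-N numbers flatter the hardware."""
    return (2 * n ** 3 / 3) / (n * n)

print(daxpy_flops_per_ref(100))    # ~0.67, independent of n
print(lu_flops_per_element(100))   # ~67
print(lu_flops_per_element(1000))  # ~667
```

This is one reason marketing literature prefers the 1000x1000 and NxN numbers: more arithmetic per memory reference shows the floating-point units in a favorable light.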

"A supercomputer is a machine which costs between $7M and $20M."

[~1984 prices]

[Today, I guess you could change the range to $10M-$30M or so (how much does a

full-up T-90 go for at the usual discount?)]

For some strange and mysterious reason, this really used

to bug people who wanted to believe that "supercomputers"

had a kind of magical, mystical aura. For some reason,

the same folks would get mad when, by the numbers, their

PC's were about ~1/1,000,000 of the then-current Cray & CDC -

they also wanted to "believe in" their PCs. My puzzlement

over this double denial is probably why I am not a successful politician.

--Hugh LeMaster

See also Grand Challenges panel.

----------------------

Where do the terms minisupercomputer and Crayette come from?

============================================================

Convex Computer Corp. coined the term minisupercomputer, and that

has largely stuck even though they now consider themselves a full-fledged

supercomputer company. "Crayette" came from Datamation for SCS,

because SCS had a Cray/COS object-code-compatible X-MP-class machine at

a fraction of the cost/performance.

Crayisms

========

The news group has covered a variety of Crayisms or sayings (some are

apoc.*ful).

%A Russell Mitchell

%T The Genius: Meet Seymour Cray, Father of the Supercomputer

%J Business Week

%N 3157

%D April 30, 1990

%P 80-86

%K Cover story, biography, circular slide rule, Cray-3,

%X Text of this article is available via Dialog(R) from McGraw-Hill

0210276

Some of these are Rollwagon-isms.

On Schedules and bureaucracy:

"Five Year Goal: Build the biggest computer in the world.

One Year Goal: Achieve one-fifth of the above."

On 2s-complement arithmetic.

'Although many "Seymour stories" are based in fact,

most are wildly exaggerated:'

On digging holes (tunnels): a 12-foot hole for wind surfing gear.

On burning boats (Rollwagen: made up the party and Carolyn Cray Bain:

"it was the easiest way to get rid of a boat").

Virtual Memory (compared with sex).

"Memory is like an orgasm - it's better when you don't have to fake it."

"You can't fake what you don't have".

"Can't use what you ain't got!"

"In this business, you can't fake what you don't have"

[Gee, I guess this quote makes this FAQ R-rated.]

On wood paneling.

I hear Seymour Cray designs machines on his Apple Macintosh.

And that Apple designs Macintoshes on their Cray.

%A Marcelo A. Gumucio

%T CRI Corporate Report

%J Cray User Group 1988 Spring Proceedings

%C Minneapolis, MN

%D 1988

%P 23-28

%K 21st Meeting

%X Seymour has 6 Apple Macs (Macintosh) used to design Crays (not just one).

Q&A section.

[Gordon Bell {See the IBM panel} admits he designs his computers on Macs, too.]

Alas, this is getting old. Seymour died.

Apple is only using their EL as a file server.

We have also covered the parity quote (panel 10).

1) Mr. Cray had always worked with core (yes Virginia, little ferrite

toruses with wires hand threaded through them). Core memory was rock stable

& almost *never* failed. My RCA 70/45 crashed 3 times in 4 years with

memory parity errors and one of those crashes was due to a friend hitting

the A/C Power Emergency Off button on the console!

2) When he designed the first Cray-1, s/n-1, Mr. Cray used RAM chips with

straight parity. The system was installed at the Los Alamos National

Laboratory. It averaged 20 minutes of blinding speed per system failure

(due to a parity error in memory). This was obviously a problem, so, after

consulting with the LANL folks ...

3) Development was halted on s/n-2. The next machine, s/n-3, was designed

with Single Error Correction - Double Error Detection (SECDED) parity in

its memory. This machine was sold to the National Center for Atmospheric

Research (NCAR) where it ran (with very few double-bit error crashes) for

many years. An aside here is that NCAR had the absolute audacity to

require that an Operating System come with the system, so Cray hired a

(shudder) programmer to write one!

4) Note that this is memory! The Cray-1 line had SECDED memory. No parity

checking was done in the CPU. The same was done for the X-MP. The Y-MP

extended parity checking to the CPU. ...
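The SECDED scheme in point 3 can be illustrated with an extended Hamming code. This is a textbook-style sketch of the general idea, not Cray's actual memory logic; it protects 4 data bits with 4 check bits (real SECDED memories protect wider words with proportionally fewer check bits):

```python
def encode(nibble):
    """Encode 4 data bits (list of 0/1) into an 8-bit SECDED codeword."""
    d = nibble
    # Hamming(7,4): parity bit at position p covers positions with bit p set.
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p4, d[1], d[2], d[3]]  # positions 1..7
    overall = 0
    for b in code:
        overall ^= b
    return code + [overall]  # extra parity bit enables double-error detection

def decode(code):
    """Return (status, 4 data bits); status is 'ok', 'corrected', or 'double'."""
    syndrome = 0
    for i, b in enumerate(code[:7], start=1):
        if b:
            syndrome ^= i  # XOR of set positions points at a single error
    overall = 0
    for b in code:
        overall ^= b
    word = list(code)
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:            # odd overall parity: one error, fixable
        if syndrome:
            word[syndrome - 1] ^= 1
        else:
            word[7] ^= 1          # the error hit the overall parity bit
        status = 'corrected'
    else:                         # nonzero syndrome, even overall parity
        status = 'double'         # two errors: detected, not correctable
    return status, [word[2], word[4], word[5], word[6]]
```

Flipping one bit of a codeword is corrected transparently; flipping two is detected but not correctable, which is exactly the rare double-bit-error crash mode mentioned above.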

Cray and new ideas (non-cray)

-----------------------------

The story frequently goes:

A bright student or architect somehow manages to get time to visit

Seymour. Cray will listen to that student's ideas and nod understanding

or disagreement. He listens to a few ideas, but he makes a comment like

"Sounds good."

20

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 1/2/98] IBM and Amdahl -- comp.parallel (20/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 20 Feb 1998 13:03:20 GMT

Message-ID: <6cjuuo$1n6$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 2 Jan 1998

20  IBM and Amdahl (the man, the article)

22  Grand challenges and HPCC

24  Suggested (required) readings

26  Dead computer architecture society

28  Dedications

2  Introduction and Table of Contents and justification

4  Comp.parallel news group history

6  parlib

8  comp.parallel group dynamics

10  Related news groups, archives and references

12

14

16

18  Supercomputing and Crayisms

Keywords and phrases:

Watson memo, marketing, COBOL, mythology, aspiring blue boxes, DEC,

laws: Amdahl, other,

challenge, debates, prizes: Karp, Bell, Hillis

Is a supercomputer, a mainframe?

================================

Short answer: yes.

Are all mainframes supercomputers?

================================

Short answer: No.

Think subsets.

Is a cluster a mainframe?

-------------------------

Maybe not. (No one ever promised you yes or no answers.)

Why do people dump on IBM in this news group?

=============================================

We'll get to the good positive stuff in a moment, but first the chronology:

IBM is a late entry into the supercomputing market.

Its lateness, its aloofness, and a certain degree of sales arrogance

have turned the readership of this news group off. You merely need to

assert IBM's superiority, and you will find out. Actually, you might

get dead silence, because the long timers in this group don't care much

anymore. It's historic. BASICALLY: don't worry about it.

A few people will say that IBM claims credit for inventing everything.

These people had not heard of the IBM VAMP.

Not to completely dump on IBM: IBM does make important peripheral technology.

"Some vice presidents of IBM assert that the speed of light goes just

a little bit faster in Armonk." --An IBM Vice President [yes, it's humor]

Anonymous contribution:

I was in White Plains, and I heard their biggy---the chemist---say

"Supercomputing is just a marketing word."

This is, in fact, also the title of a paper. It's these kinds of comments

which will continue to plague IBM. The phrase does have a little truth to it.

It's also derivative of Watson's comments about computers in general

from the late 1940s. This group keeps a copy of the texts of Watson's memo

about the performance of the CDC 6600 [then the most powerful supercomputer].

%A Ad Emmen

%A Jaap Hollenberg

%T Supercomputer is just an advertisement word, an interview with

Enrico Clementi

%J Supercomputer

%C Amsterdam

%D July-September, 1986

%P 24-33

%X Never very technical, but interesting reading.

Don't forget that "B" stands for "Business" machines and many people

regard this "marketplace" as outside business machines. The reality is

that the SP2 and many other IBM architectures aren't IBM-370 clones.

The problem created by a comment like

"Supercomputing is just a marketing word"

is that potential customers have a hard time justifying

supercomputer purchases internally.

If you think we are rubbing IBM's face in dirt, we aren't.

We are listing a history based on net discussion. Does IBM have

a supercomputer (currently)? Depends on your perspective.

See: definition of a supercomputer on the earliest panels.

On the other hand, it could be argued that IBM has snubbed potential customers.

Touche.

The following stuff is included because 1) it's frequently mentioned,

2) it's frequently quoted out of context, incomplete, etc. These are

from the published documents:

MEMORANDUM                                        August 28, 1963

Memorandum To: Messrs. A. L. Williams

T. V. Learson

H. W. Miller, Jr.

E. R. Piore

O. M. Scott

M. B. Smith

A. K. Watson

Last week CDC had a press conference during which they officially

announced their 6600 system. I understand that in the laboratory

developing this system there are only 34 people, "including the

janitor." Of these, 14 are engineers and 4 are programmers, and

only one person has a PhD., a relatively junior programmer. To

the outsider, the laboratory appeared to be cost conscious, hard

working, and highly motivated.

Contrasting this modest effort with our own vast development

activities, I fail to understand why we have lost our industry

leadership by letting someone else offer the world's most

powerful computer. At Jenny Lake, I think top priority should

be given to a discussion as to what we are doing wrong and how we

should go about changing it immediately.

T. J. Watson, Jr.

TJW,Jr:jmc

cc: Mr. W. B. McWhirter

Reproduced in A Few Good Men from Univac.

On hearing about this memo:

"It seems Mr. Watson has answered his own question."

--Seymour Cray

# http://cip2.e-technik.uni-erlangen.de:80 80/hyplan/dl4rcg/texte/quotes.col

#This link now appears to be dead.

#I'll remove this link if a suitable replacement is not found in

#a few months

What is the significance of this memo?

--------------------------------------

1) As pointed out by the book "A Few Good Men from Univac:"

The bureaucratic overhead is less in a small intimate organization.

[i.e., communication requiring complete connectivity is roughly an O(n^2)

problem.]

2) It pulls a little bit of a slap at education (PhD). [The presence of

Woz, Jobs, and Gates in some ways makes this less impressive (all rich

men lacking more than two years of college or after making their fortune,

none building supers of course).]

COBOL?

------

People have on occasion asked for a COBOL compiler in this group.

Comp.lang.cobol is a better place to ask.

COBOL-X: mentioned in the STAR-OS manual.

It is difficult to assess the role of the IBM clones: Amdahl and in particular

the major Japanese vendors: Fujitsu, NEC, and Hitachi. Many exist.

IBM will likely continue to assert itself as a "player" in this market.

What does IBM imply?

--------------------

32-bit, byte-oriented (big-endian vs. little-endian debate),

CISC architecture, EBCDIC characters, radix-16 (hexadecimal) floating point.

IBM Channel I/O rates (3.3-4.3 MB/S).

or

PCs.

Intel or RS6K processor.

I/O: SCSI-Bus, Micro-Channel.

It is easy to challenge this, but it is even easier to confirm these.

See: definition of a supercomputer on the earliest panels.

Are these "super" features?

---------------------------

Generally no.

It should be noted that some communities can do 32-bit work

(fixed-point, image processing is typical). The question then to ask is:

Does 32-bit mode run twice as fast as 64-bit? What happens with

DOUBLE PRECISION declarations? What happens with integers?

Good questions. 8^)
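One hedged way to frame the 32-bit question: if a loop is memory-bandwidth bound, halving the word size at best doubles the element rate (the bandwidth figure below is made up for illustration):

```python
def elements_per_second(bandwidth_bytes_per_s, element_bytes):
    """Peak element throughput of a memory-bound streaming loop:
    bandwidth divided by element size."""
    return bandwidth_bytes_per_s / element_bytes

bw = 100e6  # hypothetical 100 MB/s memory system
print(elements_per_second(bw, 4) / elements_per_second(bw, 8))  # 2.0
```

Whether real codes see that factor of two depends on compute-bound phases, DOUBLE PRECISION promotion, and integer handling, i.e. the questions above.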

IBM Positives

-------------

The SP/2:

http://ibm.tc.cornell.edu/ibm/pps/doc/

http://lscftp.kgn.ibm.com/pps/vibm/index.html

http://www.tc.cornell.edu/~jeg/config.html

22

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 3/24/97] Grand challenges and HPCC comp.parallel (22/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 22 May 1998 12:03:08 GMT

Message-ID: <6k3phs$ebr$1@sun500.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 24 Mar 1997

22  Grand challenges and HPCC

24  Suggested (required) readings

26  Dead computer architecture society

28  Dedications

2  Introduction and Table of Contents and justification

4  Comp.parallel news group history

6  parlib

8  comp.parallel group dynamics

10  Related news groups, archives and references

12

14

16

18  Supercomputing and Crayisms

20  IBM and Amdahl

What is the list of "Grand Challenge" problems?

===============================================

Steve Stevenson:

See the blue book.

And people say that I am cynical.

[Sorry Steve, I have run out of tee-shirts.]

NOTE: This list is updated every year. The most recent version can be

obtained from the National Science Foundation through pubs@note.nsf.gov.

%Q Executive Office of the President, Office of Science and Technology Policy

%T The Federal High Performance Computing Program

%D Sept. 1989

%X Appendix A Summary, pages 49-50.

Prediction of weather, climate, and global change

Challenges in materials sciences

Semiconductor design

Superconductivity

Structural biology

Design of pharmaceutical drugs

Human genome

Quantum Chromodynamics

Astronomy

Challenges in Transportation

Vehicle Signature

Turbulence

Vehicle dynamics

Nuclear fusion

Efficiency of combustion systems

Enhanced oil and gas recovery

Computational ocean sciences

Speech

Vision

Undersea surveillance for ASW

What is ASW?

============

Anti-Submarine Warfare.

What constitutes a "Grand Challenge" problem?

=============================================

"A _grand_challenge_ is a fundamental problem in science or

engineering, with broad applications, whose solution would be

enabled by the application of high performance computing resources

that could become available in the near future. Examples of grand

challenges are:

(1) Computational fluid dynamics for

the design of hypersonic aircraft,

efficient automobile bodies, and

extremely quiet submarines,

for weather forecasting for

short and

long term effects,

efficient recovery of oil, and for many other applications;

(2) Electronic structure calculations for the design of new materials such as

chemical catalysts,

immunological agents, and

superconductors;

(3) Plasma dynamics for fusion energy technology and

for safe and efficient military technology;

(4) Calculations to understand the fundamental nature of matter,

including quantum chromodynamics and condensed matter theory;

(5) Symbolic computations including

speech recognition,

computer vision,

natural language understanding,

automated reasoning, and

tools for

design,

manufacturing,

and simulation of complex systems."

"A Research and Development Strategy for High Performance Computing"

Executive Office of the President

Office of Science and Technology Policy

November 20, 1987

Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)

Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel

24

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 7/23/97] Suggested readings comp.par/comp.sys.super (24/28) FAQ

Keywords: REQ,

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 24 May 1998 12:03:07 GMT

Message-ID: <6k929r$7ja$1@sun500.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 23 Jul 1997

24Suggested (required) readings< * this panel * >

26Dead computer architecture society

28Dedications

2Introduction and Table of Contents and justification

4Comp.parallel news group history

6parlib

8comp.parallel group dynamics

10Related news groups, archives and references

12

14

16

18Supercomputing and Crayisms

20IBM and Amdahl

22Grand challenges and HPCC

So you didn't search TM-86000? (panel 14).

Here's the context: this is more parallel (rather than super) computing

oriented.

Every calendar year, I ask in comp.parallel for everyone's opinions

on what people should be reading. I couch this with the proviso that

the reader be at least a 1st or 2nd year grad student in computer science

or related technical field. This presumes some basic ACM CORE curriculum

knowledge like:

basic computer architecture,

compilers,

operating systems, and some numerical analysis

(some would argue: not enough, but that's a separate argument).

For better or worse, it's done numerically (a mid-1980s experiment).

Every suggester gets "10 votes."

You will see the 10 perceived "REQUIRED" readings in parallel computing

by your colleagues, and they are very good colleagues (JH, DP, DH, etc.).

Disadvantages:

1) sometimes 10 votes is not enough (I made the rules, I can make

exceptions).

2) new unfamiliar books tend to take time to make it to "the top-10."

Yes, some references might be old, so vote for newer references

and encourage your colleagues to "vote" for those references, too.

3) for those we have a RECOMMENDED 100 (for recommended class

reading lists). Search (panel 14 in TM-86000) and find them.

I might make a separate FAQ panel later. Ten is enough for now.

Some people will claim "anti-votes." Sorry I have no provision for anti-votes

except to note them in annotations. Watch for them!

And if you have voted in the past and wish to change your "vote,"

just ask.

We are not doing this to sell textbooks. This is merely a yearly opinion

survey. You can suggest 10 at just about anytime (especially if you want to

N an existing endorsement, or anti, or whatever).

COME ON COME! you are long winded

-------------

Here:

REQUIRED

%A George S. Almasi

%A Allan Gottlieb

%T Highly Parallel Computing, 2nd ed.

%I Benjamin/Cummings division of Addison Wesley Inc.

%D 1994

%K ISBN 0-8053-0443-6

%K ISBN # 0-8053-0177-1, book, text, Ultracomputer, grequired96, 91,

%d 1st edition, 1989

%K enm, cb@uk, ag, jlh, dp, gl, dar, dfk, a(umn),

%$ $36.95

%X This is a kinda neat book. There are special net anecdotes

which make this interesting.

%X Oh, there are a few significant typos: LINPAK is really LINPACK. Etc.

These were fixed in the second edition.

%X It's cheesy in places and the typography is

pitiful, but it's still the best survey of parallel processing. We really

need a Hennessy and Patterson for parallel processing.

(The typography was much improved in the second edition so much of

the cheesy flavor is gone --ag.)

%X (JLH & DP) The authors discuss the basic foundations, applications,

programming models, language and operating system issues and a wide

variety of architectural approaches. The discussions of parallel

architectures include a section that describes the key concepts within

a particular approach.

%X Very broad coverage of architecture, languages, background theory,

software, etc. Not really a book on programming, of course, but

certainly a good book otherwise.

%X Top-10 required reading in computer architecture to Dave Patterson.

%X It is hardware oriented, but makes some useful comments on programming.

%A Michael Wolfe

%T Optimizing Supercompilers for Supercomputers

%S Pitman Research Monographs in Parallel and Distributed Computing

%I MIT

%C Cambridge, MA

%D 1989

%d October 1982

%r Ph. D. Dissertation

%K parallelization, compiler, summary,

%K book, text,

%K grequired91/3,

%K cbuk, dmp, lls, +6 c.compilers,

%K Recursion removal and parallel code

%X Good technical intro to dependence analysis, based on Wolfe's PhD Thesis.

%X This dissertation was re-issued in 1989 by MIT under its Pitman

parallel processing series.

%X ...synchronization and locking instructions when compiling the

parallel procedures and those called by them. This is a bit like

the 'random synchronization' method described by Wolfe but

works with pointer-based data structures rather than array elements.

%X Cited Chapters:

Data Dependence 11-57

Structure of a Supercompiler 214-218

%A W. Daniel Hillis

%A Guy L. Steele, Jr.

%Z Thinking Machines Corp.

%T Data Parallel Algorithms

%J Communications of the ACM

%V 29

%N 12

%D December 1986

%P 1170-1183

%r DP86-2

%K Special issue on parallel processing,

grequired97: enm, hcc, dmp, jlh, dp, jwvz, sm,

CR Categories and Subject Descriptors: B.2.1 [Arithmetic and Logic Structures]:

Design Styles - parallel; C.1.2 [Processor Architectures]:

Multiple Data Stream Architectures (Multiprocessors) - parallel processors;

D.1.3 [Programming Techniques] Concurrent Programming;

D.3.3 [Programming Languages] Language Constructs -

concurrent programming structures: E.2 [Data Storage Representations]:

linked representations; F.1.2 [Computation by Abstract Devices]:

Modes of Computation - parallelism; G.1.0 [Numerical Analysis]

General- parallel algorithms,

General Terms: Algorithms

Additional Key Words and Phrases: Combinator reduction, combinators,

Connection Machine computer system, linked lists, parallel prefix,

SIMD, sorting, Ultracomputer

%K Rhighnam, algorithms, analysis, Connection Machine, programming, SIMD, CM,

%X (JLH & DP) Discusses the challenges and approaches for programming an SIMD

machine like the Connection Machine.
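The "parallel prefix" operation named in the keywords above is the log-step scan popularized by this paper. A minimal sketch (mine, not the paper's code), simulated sequentially in Python; on a real SIMD machine each element would be updated by its own processor in lockstep:

```python
def parallel_prefix(xs):
    """Inclusive prefix sum via Hillis-Steele log-step doubling."""
    xs = list(xs)
    n = len(xs)
    step = 1
    while step < n:
        # One lockstep "round": element i reads neighbor i - step.
        # ceil(log2 n) rounds total, versus n - 1 serial additions.
        xs = [xs[i] + xs[i - step] if i >= step else xs[i]
              for i in range(n)]
        step *= 2
    return xs

print(parallel_prefix([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

Any associative operator (max, boolean OR, etc.) can replace the addition.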

%A C. L. Seitz

%T The Cosmic Cube

%J Communications of the ACM

%V 28

%N 1

%D January 1985

%P 22-33

%r Hm83

%d June 1984

%K grequired91: enm, dmp, jlh, dp, j-lb, jwvz,

Rcccp, Rhighnam,

%K CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]:

Multiple Data Stream Architectures (Multiprocessors);

C.5.4 [Computer System Implementation]: VLSI Systems;

D.1.2 [Programming Techniques]: Concurrent Programming;

D.4.1 [Operating Systems]: Process Management

General terms: Algorithms, Design, Experimentation

Additional Key Words and Phrases: highly concurrent computing,

message-passing architectures, message-based operating systems,

process programming, object-oriented programming, VLSI systems,

homogeneous machine, hypercube, C^3P,

%X Excellent survey of this project.

Reproduced in "Parallel Computing: Theory and Comparisons,"

by G. Jack Lipovski and Miroslaw Malek,

Wiley-Interscience, New York, 1987, pp. 295-311, appendix E.

%X * Brief survey of the cosmic cube, and its hardware

26

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m xx/xx/xx] Dead Comp. Arch. Society c.par/c.s.super (26/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 26 May 1998 12:03:05 GMT

Message-ID: <6keb1p$1sj$1@sun500.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 30 Apr 1998

26Dead computer architecture society< * This Panel * >

27Special call

28Dedications

2Introduction and Table of Contents and justification

4Comp.parallel news group history

6parlib

8comp.parallel group dynamics

10Related news groups, archives and references

12

14

16

18Supercomputing and Crayisms

20IBM and Amdahl

22Grand challenges and HPCC

24Suggested (required) readings

This space intentionally left blank (temporarily).

UNDERDEVELOPMENT

This is a roughly chronological list of past supercomputer, parallel computer,

or especially "interesting" architectures, not paper designs (See panel 14,

for references for those). Computer archeology is important

(not merely interesting), because it is the failed projects where real

learning takes place. Even Seymour Cray designed "failed" machines.

DCAS came from a so-so Robin Williams movie: Dead Poets Society (DPS)

which nerd CS students went to see (trust me, he's better in live performance).

In turn, the dead-architecture, lessons-learned discussion started in

comp.arch later that same year. The idea was to collect material from

knowledgeable ex-engineers and former scientists, anonymously if need be,

before it was lost (since the company had either died or evolved).

The problem is that academic and commercial literature is fraught with

all kinds of useless glowing marketing/sales language. We (the net,

I didn't do this alone) collected comments anonymously (if need be)

to prevent lessons being lost. The idea was that anyone could comment.

Netters had hashed over this material so many times before that

it seemed useful to capture it (like an FAQ ;^). We assembled a

list of architectures.

Maybe, a third the way through the list, I was asked by certain people

with CRI to suspend discussion, because CRI was starting to acquire

Supertek (which I personally always thought was a mistake).

We never resumed. We lost the inertia.

Ever hear of the Gibbs Project?

If not: you should not be surprised.

Around that same time, ASPLOS came to Santa Clara, where they held a

Dead Computer Architecture Society panel session. I had a meeting of

some sort (possibly SIGGRAPH) and I missed the starting hour.

I gave Peter Capek of IBM TJW a video camera, but I did not keep the tape

because I merely wanted to see what I had missed

(if I had, I would have given it to J. Fisher who sat on the panel).

I did not regard that as recording history.

The panel session discussed the various failed minisupercomputer firms

(perhaps I should use more flowery marketing language like

"attempted?"). Either way, lessons were there in front of 200+ architects,

OS and language designers. Perhaps there was another video camera

in the room.....

Let's see, what were the four architectures represented?

Elxsi

...

Multiflow

...

One poster asked, "Why no mention of the Symbolics 3600, LMI, or TI

LISP machines?" I am not averse to including the lessons from those machines;

however, the DCAS discussion was about minisupercomputers. The 3600 and other

LISP machines fell more into the class of workstations during their time

competing with the Xerox "D-machines" [Dorado, Dolphin, and Dandelion],

SUN, SGI, VAXStation, etc. Most at the time were not even parallel machines.

But if you can pitch me a good case, I'll consider them. Do it.

Also useful:

old header files for those systems which ran C compilers.

Most recently, I am reminded of a warm fall Saturday morning in a house

on a hill overlooking the beautiful Santa Barbara Channel.

George Michael, whom I had driven there just to see Glen Culler (who had suffered

a stroke some time back), was talking about "war stories," when

Ms. Culler [wife and David's mother] chimed in:

"I really think you need a better title for your book

{one GAM was working on}. No one will buy it with a word like

'war stories' in the title...."

Three of us in the room chuckled. She is great.

The Dead Computer Architecture Society

======================================

Floating Point Systems (FPS)

----------------------------

(Purchased by Cray Research)

FPS AP-series (Culler based design with VLIW attributes)

7600 performance attached to your PDP-11.

Roots with Culler-Harris (CHI), Inc. FPS started with specialized

attached processors FPS AP-120B, and scaled from there

to the FPS T-series Hypercubes. The AP-120 line could be attached

to machines as small as a PDP-11. They were controlled by specialized

Fortran (and later C) system calls (a software emulator existed for code

development: obviously slow). Known as an FFT and MXM box.

It was marketed in 1977 in Scientific American as 7600 power on

your minicomputer and showed quite respectable, but economical,

number crunching power (I/O was still a problem).

38-bit words. Pipelined, precursor to VLIW? Perhaps.

Later models: FPS-164, FPS-264, FPS-500, APP

Larger 64-bit attached processors. Pre-IEEE-FP. Attached processors

became useful and popular for signal processing, medical apps.

FPS T-series (hypercubes)

Someone else (maybe Stevenson) can write a T-series paragraph.

Absorbed by Cray Research.

This business unit was sold by SGI to Sun at the time of the SGI/Cray merger, 7/96.

[Current living incarnation.] The former CS-6400 line:

Current living incarnation is the UltraEnterprise 10000 and UltraHPC 10000

(2 different names for 2 different markets, same box).

Denelcor

--------

The Denelcor Heterogeneous Element Processor (HEP) was perhaps the most

unusual architecture a student will never get a chance to see.

My first knowledge of this machine came from Mike Muuss (BRL, scheduled

to get one [4 PEMs delivered]) at a time when the DEC VAX-11/780 was the only

VAX around. Later I would invite representatives to Ames.

7600-class scalar CPUs at a time when the Cray-1 was out

and the X-MP was just being delivered. 64-bit machine.

1978-1984.

Full-Empty bits on the memory, goes way beyond mere Test-and-Set

instructions.

Separate physical Instruction (128 MB) and Data (1 GB) memories.

Denelcor was based in Aurora, CO, east of Denver.

Operating systems: HEP-OS and HEP Unix.

Programming and architecture manuals at the Museum.

Keywords: dataflow (limited),

13 systems delivered. Photos.

Sites (Messina list, 13 sites):

BRL (only 4 PEMs)

Argonne

LANL

GIT

XXX (probably)

Lufthansa

7 to go.

Problems: somewhat underpowered at the time, programming difficulties.

Hardware deadlock. Early inexperience with serious parallel systems.

Software. Ambitious. Pipelining. Dataflow.
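The full/empty bits mentioned above can be sketched in modern terms. This is my illustrative model (names like FEWord are invented, not HEP documentation): each memory word carries a "full" flag, a synchronized read blocks until the word is full and marks it empty, and a synchronized write blocks until empty and marks it full, giving producer/consumer handoff in the memory itself rather than a bare test-and-set spin loop.

```python
import threading

class FEWord:
    """One memory word with a HEP-style full/empty tag bit."""
    def __init__(self):
        self.value = None
        self.full = False
        self.cv = threading.Condition()

    def write_ef(self, v):
        # Write when Empty, leave Full: blocks if a previous value
        # has not yet been consumed.
        with self.cv:
            while self.full:
                self.cv.wait()
            self.value, self.full = v, True
            self.cv.notify_all()

    def read_fe(self):
        # Read when Full, leave Empty: blocks until a value arrives.
        with self.cv:
            while not self.full:
                self.cv.wait()
            self.full = False
            self.cv.notify_all()
            return self.value
```

A producer thread calling write_ef and a consumer calling read_fe alternate automatically, with no explicit lock visible to either.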

28

Newsgroups: comp.parallel,comp.sys.super

From: eugene@sally.nas.nasa.gov (Eugene N. Miya)

Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)

Subject: [l/m 3/4/96] Dedications comp.parallel (28/28) FAQ

Organization: NASA Ames Research Center, Moffett Field, CA

Date: 28 Feb 1998 13:03:15 GMT

Message-ID: <6d91uj$pr2$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq

Last-modified: 4 Mar 1996

28Dedications

2Introduction and Table of Contents and justification

4Comp.parallel news group history

6parlib

8comp.parallel group dynamics

10Related news groups, archives and references

12

14

16

18Supercomputing and Crayisms

20IBM and Amdahl

22Grand challenges and HPCC

24Suggested (required) readings

26Dead computer architecture society

Dedications

===========

This FAQ is dedicated to the memory of Dr. Sidney (Sid) Fernbach (CDC/LLL)

and to George Michael (LLNL, still alive), perhaps two of the men most

responsible for coining the term "supercomputer."

They were not only responsible for helping promote and

fund supercomputing and parallel computing, but they also

took much flak. They also believed in interactive computing at a time

when the world was batch-oriented.

To Sid:

I will always get a chuckle when reminded that you felt that the

Cray Time Sharing System (CTSS, as distinct from the

Cambridge Time Sharing System or the Compatible Time Sharing System)

would have made a good VAX operating system along with the Trix editor.

Sid passed away on the day he was to speak at a local ACM SIGBIG meeting.

The VMS people would have had a great fit.

To George with whom I used to occasionally share an office one day a week

here at Ames: thanks for the stimulating discussion and even bowing to

the new generation of computists. Your contribution is undervalued by many.

All of your friends remember you.

Additional, special mention:

Seymour Cray is unquestionably credited with making "supercomputer"

a household word (which was apparently missed by some people in the mid-1980s).

To quote Nolan Bushnell, Cray embodies:

Remember engineers drive the boat.

We are here because of Seymour and others.

And Seymour is here because of Jim Thornton.

Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)

Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel
