T.A. Voloshina, L.B. Saratovskaya - English Reader in Computer Science, page 10
There have always been debates over the usefulness and performance of assembly language relative to high-level languages, though the question receives less attention today. Assembly language retains specific niche uses where it is important. The complexity of modern processors makes effective hand-optimization increasingly difficult, and this has made raw execution speed a non-issue for most programmers.
In practice, only a handful of situations remain in which today's expert practitioners would choose assembly language. Here are some of them:
When interacting directly with the hardware.
When extreme optimization is required, e.g., in an inner loop of a processor-intensive algorithm. Some game programmers are experts at writing code that takes advantage of hardware features, enabling games to run faster.
When complete control over the environment is required (for example, in extremely high-security situations where nothing can be taken for granted).
When writing computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
ASM is also still used for writing games and other software for graphics calculators.
For any given personal computer, mainframe, embedded system, or game console, past or present, at least one assembler - possibly dozens - has been written.
Some higher-level computer languages, such as C and Borland Pascal, support inline assembly, where relatively brief sections of assembly code can be embedded in the high-level language code.
Assembly language is still taught in most Computer Science and Electronic Engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain very important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer’s behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn:
- the basic concepts;
- to recognize situations where the use of assembly language might be appropriate;
- to see how efficient executable code can be created from high-level languages.
Ex.1. Answer the following questions:
- What is the difference between assembly language and a high-level language?
- How does an assembler create object code?
- What does a program written in assembly language consist of?
- What is typical of instructions (statements) in assembly language?
- What were historically the biggest reasons for using ASM?
- What is ASM used for?
Ex.2. Give the main ideas of the text in logical order.
Ex.3. Translate in writing:
The first assembly languages were developed in the 1950s and came to be called second-generation programming languages. They practically eliminated the need for the error-prone and time-consuming programming in the first-generation languages used on the earliest computers, thereby freeing the programmer from the tedious memorization of numeric codes and calculation of addresses.
At one time they were widely used for programming of all kinds. But in the 1980s (on smaller computers, in the 1990s) they were largely displaced by high-level languages. Today assembly language is used mainly for direct hardware control, for access to specialized processor instructions, or to address performance-critical tasks. It is often used when writing device drivers, low-level embedded systems, and real-time operating systems.
Many more sophisticated assemblers provide additional mechanisms to ease program development, control the translation process, and assist in finding errors. In particular, most modern assemblers include a macro facility and are called macro assemblers.
Ex.4. Topics for discussion.
- Key features of assemblers.
- Use of assembly language.
- The importance of assembly language.
UNIT 12
Future Computers
Key vocabulary:
- Harness v. - to control and put to use
- Superposition n. - superposition
- Inherent adj. - intrinsic, built-in, inseparable
- Conventional adj. - ordinary, customary, traditional
- Entanglement n. - (quantum) entanglement
- Assume v. - to take on, adopt (a value or role)
- Devise v. - to think up, invent, develop
- Disturb v. - to upset the course or balance of
- Incredibly adv. - unbelievably
- Time-consuming adj. - taking a great deal of time
- Confine v. - to restrict, limit
- Occur v. - to happen, take place
- Ultimate goal - final goal
- Commonplace n. - an ordinary, everyday thing
- Destination n. - the place something is sent to; goal
- Employ v. - to use
- Rely on v. - to depend on, trust
- To some extent - to a certain degree
- Intend v. - to design for a particular purpose
- Frequency sensitive - responding readily to changes in frequency
- Trapping n. - capturing, catching
Quantum Computers
Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. In a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
This superposition of qubits is what gives quantum computers their inherent parallelism. This parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.
The most advanced quantum computers have not gone beyond manipulating 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers could one day perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers.
Optical Computers
An optical computer (also called a photonic computer) is a device that uses the photons in visible light or infrared (IR) beams, rather than electric current, to perform digital computations. An electric current flows at only about 10 percent of the speed of light. This limits the rate at which data can be exchanged over long distances, and is one of the factors that led to the evolution of optical fiber. By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer might someday be developed that can perform operations 10 or more times faster than a conventional electronic computer.
Visible-light and IR beams, unlike electric currents, pass through each other without interacting. Several (or many) laser beams can be shone so their paths intersect, but there is no interference among the beams, even when they are confined essentially to two dimensions. Electric currents must be guided around each other, and this makes three-dimensional wiring necessary. Thus, an optical computer, besides being much faster than an electronic one, might also be smaller.
Some engineers think optical computing will someday be common, but most agree that transitions will occur in specialized areas one at a time. Some optical integrated circuits have been designed and manufactured. (At least one complete, although rather large, computer has been built using optical circuits.) Three-dimensional, full-motion video can be transmitted along a bundle of fibers by breaking the image into voxels. (A voxel is a unit of graphic information that defines a point in three-dimensional space.) Some optical devices can be controlled by electronic currents, even though the impulses carrying the data are visible light or IR.
Optical technology has made its most significant inroads in digital communications, where fiber optic data transmission has become commonplace. The ultimate goal is the so-called photonic network, which uses visible and IR energy exclusively between each source and destination. Optical technology is employed in CD-ROM drives and their relatives, laser printers, and most photocopiers and scanners. However, none of these devices are fully optical; all rely to some extent on conventional electronic circuits and components.
Today's computers use the movement of electrons in-and-out of transistors to do logic. Photonic computing is intended to use photons or light particles, produced by lasers, in place of electrons. Compared to electrons, photons are much faster – light travels about 30 cm, or one foot, in a nanosecond – and have a higher bandwidth.
Computers work with binary, on or off, states. A completely optical computer requires that one light beam can turn another on and off. This was first achieved with the photonic transistor, invented in 1989 at the Rocky Mountain Research Center. This demonstration eventually created a growing interest in making photonic logic componentry utilizing light interference.
Light interference is very frequency sensitive. This means that a narrow band of photon frequencies can be used to represent one bit in a binary number. Many of today's electronic computers use 64 or 128 bit-position logic. The visible light spectrum alone could enable 123 billion bit positions.
Recent research shows promise in temporarily trapping light in crystals. Trapping light is seen as a necessary element in replacing electron storage for computer logic. Recent years have seen the development of new conducting polymers which create transistor-like switches that are smaller, and 1,000 times faster, than silicon transistors.