Lomonosov Moscow State University
Faculty of Computational Mathematics and Cybernetics
“Microprocessors”
Pechenezhskiy Konstantin
group 206
Moscow, 2008.
Introduction
The topic of this abstract has always interested me. For a long time I wanted to understand how a small slice of metal and silicon can carry out any operation, how commands are executed at the level of microcircuits, and where the processing of information begins.
The abstract therefore presents the basic facts about how processors are built, from their internal structure to their principles of operation. It also covers the main ways incoming information is processed, how the processor's internal structure is organized, and the improvements that have appeared in the course of developing new processor models.
The report is also interesting because it describes technologies and principles that we rarely think about at all, for example how the processing of information inside the processor is accelerated.
Dissecting the Heart of Your Computer
The central processing unit (CPU) is the heart of your computer. This vital component, often referred to simply as the microprocessor (or even just processor), is in some way responsible for every single thing your computer does. It determines, at least in part, which operating systems you can use, which software packages are available to you, how much energy your PC uses, and how stable your system will be, among other things. The processor also dictates how much your system will cost: The newer and more powerful the processor, the more expensive the machine.
THE MAKEUP OF A MICROPROCESSOR
You may think a processor is the square or rectangular piece with many pins that fits into the processor slot on your motherboard, but actually that is just the packaging that contains the processor. The processor itself is a small, thin chip of silicon crystal, typically less than half a square inch in area. The packaging both protects the processor from contaminants (such as the air) and allows it, through the pins, to engage the motherboard's circuits and hence the system as a whole. The millions of electronic switches (the transistors) inside the processor need a carefully controlled environment in which to function.
Although most processors are made of silicon, any semiconductor material will do, as long as it can be fabricated into high-quality pieces of the necessary size. Silicon is widely available and inexpensive because of its ubiquitous use, and it is therefore the most popular material. Silicon works well because it can form large crystals of uniformly high quality; each crystal is about 8 inches across, which is important because manufacturers want to cut each crystal into as many chips as possible.
Precision saws cut the crystal into slices less than a millimeter thick. These slices, called wafers, are chemically treated before being cut into individual chips. The process for physically applying the logical design of the processor to the chip is called photolithography; in this step, transistors and tiny wires are built onto the chip in a series of ten or more layers (called masks). Once this layering is complete, the chip is tested several times to ensure that the transistors and wires are in place and working properly, and then the chip is placed within its packaging. The packaging not only protects the chip but also dissipates heat and allows the processor to connect to the motherboard.
Over the years packaging has changed considerably, with new methods adopted for various processor designs. The first Intel chips used dual inline packages (DIPs), in which two parallel sets of 40 or more pins provided the connection to the motherboard. Because of the parallel design, upgrades to this package could not accommodate significant expansion of connectors: The package would simply get too long for the motherboard as pins were added, and signals from the end pins would require much more time to reach the processor chip than signals from closer pins. For these reasons, the 80286 processor introduced the pin-grid array (PGA) package. This package is typically square, with two, three, or even four rows of evenly spaced pins arranged around a central area. The pins fit into the corresponding holes of the socket module on the motherboard, and typically the package is locked in place by a levered arm.
The square (or squarish) package design has remained dominant. As the quest for more capable processors grew, wider buses were needed and consequently more pins were required to fit these buses, and many alterations of the package began to appear. Pentium processors use the staggered pin-grid array (SPGA) design, which staggers the arrangement of the pins to allow them to fit closer together. The Pentium Pro, because it has separate chips for the CPU and the Level 2 cache, uses a design called the multi-chip module (MCM). An MCM is a package that contains more than one chip. Another recent package, the leadless chip carrier (LCC), uses tiny contact pads of gold instead of pins to make contact with the motherboard.
Other packages include the tape-carrier package (TCP) — which is as thin as photographic film and is soldered to the motherboard — and the single-edge contact (SEC) cartridge. This is actually a PGA package mounted on a small daughter-card that attaches to the motherboard through a single-edge connector.
INSIDE THE PROCESSOR
Fundamentally all processors do the same thing. They take signals in the form of 0s and 1s (thus binary signals), manipulate them according to a set of instructions, and produce output in the form of 0s and 1s. The voltage on the line at the time a signal is sent determines whether the signal is a 0 or a 1. On a 3.3-volt system, an application of 3.3 volts means that it's a 1, while an application of 0 volts means it's a 0.
Processors work by reacting to an input of 0s and 1s in specific ways and then returning an output based on the decision. The decision itself happens in a circuit called a logic gate, each of which requires at least one transistor, with the inputs and outputs arranged differently by different operations. The fact that today's processors contain billions of transistors offers a clue as to how complex the logic system is. The processor's logic gates work together to make decisions using Boolean logic, which is based on the algebraic system established by mathematician George Boole. The main Boolean operators are AND, OR, NOT, and NAND (not AND); many combinations of these are possible as well. An AND gate outputs a 1 only if both its inputs are 1s. An OR gate outputs a 1 if at least one of the inputs is a 1. And a NOT gate takes a single input and reverses it, outputting 1 if the input was 0 and vice versa. NAND gates are very popular, because they use only two transistors instead of the three in an AND gate yet provide just as much functionality. In addition, the processor uses gates in combination to perform arithmetic functions; it can also use them to trigger the storage of data in memory.
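To make the Boolean operators above concrete, here is a minimal sketch in C, added for illustration (it is not part of the original text, and the gate_* names are invented for the example). Each gate is modelled as a function on single bits, and the last two functions show how NAND alone can reproduce AND and NOT, which hints at why NAND gates are used so widely.

/* Illustrative sketch: Boolean gates modelled on single bits (0 or 1)
 * with C's bitwise operators. Function names are invented for the example. */
#include <stdio.h>

static int gate_and(int a, int b)  { return a & b; }
static int gate_or(int a, int b)   { return a | b; }
static int gate_not(int a)         { return a ^ 1; }
static int gate_nand(int a, int b) { return gate_not(gate_and(a, b)); }

/* NAND alone is enough to rebuild the other gates. */
static int and_from_nand(int a, int b) { return gate_nand(gate_nand(a, b), gate_nand(a, b)); }
static int not_from_nand(int a)        { return gate_nand(a, a); }

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d OR=%d NAND=%d AND-from-NAND=%d\n",
                   a, b, gate_and(a, b), gate_or(a, b),
                   gate_nand(a, b), and_from_nand(a, b));
    printf("NOT 0 = %d, NOT 1 = %d (via NAND: %d, %d)\n",
           gate_not(0), gate_not(1), not_from_nand(0), not_from_nand(1));
    return 0;
}

Running the program prints the full truth table and confirms that the NAND-built gates behave exactly like the direct ones.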
Logic gates operate via hardware known as a switch, in particular a digital switch. In the good old days of room-size computers (which looked lots more impressive in movies than today's machines), the switches were actually physical switches, but today nothing moves except the current itself. The most common type of switch in today's computers is a transistor known as a MOSFET (metal-oxide semiconductor field-effect transistor). This kind of transistor performs a simple but crucial function: When voltage is applied to it, it reacts by turning the circuit either on or off. Most PC microprocessors today operate at 3.3V, but earlier processors (up to and including some versions of the Pentium) operated at 5V. With one type of MOSFET (which will be the focus here), an incoming current at or near the high end of the voltage range switches the circuit on, while an incoming current near 0 switches the circuit off.
Millions of MOSFETs act together, according to the instructions from a program, to control the flow of electricity through the logic gates to produce the required result. Again, each logic gate contains one or more transistors, and each transistor must control the current so that the circuit itself will switch from off to on, switch from on to off, or stay in its current state.
A quick look at the simple AND and OR logic-gate circuits shows how the circuitry works. Each of these gates acts on two incoming signals to produce one outgoing signal. Logical AND means that both inputs must be 1 in order for the output to be 1; logical OR means that either input can be 1 to get a result of 1. In the AND gate, both incoming signals must be high-voltage (or a logical 1) for the gate to pass current through itself. Otherwise the circuit will remain turned off, giving you a logical 0. In the OR gate, as long as either incoming current is high, the gate will allow the current through.
The flow of electricity through each gate is controlled by that gate's transistor. However, these transistors aren't individual and discrete units. Instead, large numbers of them are manufactured from a single piece of silicon (or other semiconductor material) and linked together without wires or other external materials. These units are called integrated circuits (ICs), and their development basically made the complexity of the microprocessor possible. The integration of circuits didn't stop with the first ICs. Just as the first ICs connected multiple transistors, multiple ICs became similarly linked, in a process known as large-scale integration (LSI); eventually such sets of ICs were connected, in a process called (using the industry's deeply creative naming techniques) very large-scale integration (VLSI). Intel's first claim to fame lay in its high-level integration of all the processor's logic gates into a single complex chip. The first processor to do this was the Intel 4004, the forerunner of all of today's Intel offerings.
Two of the most crucial components of the processor are the registers and the system clock. A register is an internal storage area, a unit of memory; and because it is part of the processor, it has the fastest type of memory in your system. Its function is to hold data used by instructions, in the form of bit patterns (sequences of 0s and 1s), in specific places where the processor can find them. The importance of the registers is demonstrated by the fact that processors are identified in one significant way by register size. The term 16-bit processor refers to a processor with registers capable of holding 16 bits of data. Therefore, 32-bit processors have 32-bit register sizes, and 64-bit processors have double that. The greater the number of bits in a register, the more information the processor can process at once.
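As an illustration of what register width means in practice, the short C program below (an assumed example, using C's fixed-width integer types as stand-ins for 16-, 32- and 64-bit registers) prints the largest value each width can hold in a single register.

/* Illustrative sketch: fixed-width integer types as stand-ins for
 * 16-, 32- and 64-bit registers. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t r16 = UINT16_MAX;  /* largest value a 16-bit register can hold */
    uint32_t r32 = UINT32_MAX;  /* ... a 32-bit register */
    uint64_t r64 = UINT64_MAX;  /* ... a 64-bit register */

    printf("16-bit register holds up to %lu\n", (unsigned long)r16);
    printf("32-bit register holds up to %lu\n", (unsigned long)r32);
    printf("64-bit register holds up to %llu\n", (unsigned long long)r64);

    /* A 64-bit addition is a single step on a 64-bit CPU, while a 16-bit
     * CPU would have to break the same work into several smaller steps. */
    return 0;
}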
The processor spends its time reacting to signals, but it can't react to all of them at the same time or they would become jumbled. Instead, the processor waits until it is given the go-ahead to receive a signal; how long it waits is determined by the system clock. At precise intervals, the system clock sends electrical pulses as a means of polling the system for waiting instructions. If an instruction is waiting and the processor is not already busy with previous instructions, the processor brings the instruction in and works on it. The number of instructions the processor can handle in a single clock cycle (one pulse of the system clock) depends on the design of the processor itself. The first microprocessors were able to handle only one instruction per cycle, but today's processors speed this up considerably through two processes, called pipelining and superscalar execution. Pipelining allows the processor to read a new instruction from memory before it is finished processing the current instruction. In some processors, several instructions can be worked on simultaneously. The extent to which pipelined data can flow into the processor is called the pipeline depth. Up through the 80286, Intel processors had a pipeline depth of only 1 (in effect, there was no pipeline at all), but with the 80486 family, the pipeline depth jumped to 4; up to four instructions could be in different pipeline stages.
A superscalar processor has more than one pipeline, meaning it can execute more than one set of instructions at the same time. Theoretically this can double performance, but usually one of the pipelines ends up waiting for an instruction to finish in another pipeline.
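The effect of pipelining on throughput can be sketched with a very rough cycle-count model. The C program below is a simplified illustration under the assumption of a fixed four-stage pipeline and no stalls; it is not a real CPU simulator.

/* Rough back-of-the-envelope model of why pipelining helps.
 * Each instruction needs `stages` steps; without a pipeline the next
 * instruction waits for the previous one to finish, while with a
 * pipeline a new instruction can enter every cycle once the pipeline
 * is full. Purely illustrative assumptions, no stalls modelled. */
#include <stdio.h>

int main(void) {
    const int stages = 4;          /* pipeline depth, as in the 80486 example */
    const int instructions = 100;  /* number of instructions to run */

    int cycles_unpipelined = instructions * stages;
    int cycles_pipelined   = stages + (instructions - 1); /* fill once, then one per cycle */

    printf("Unpipelined: %d cycles\n", cycles_unpipelined);
    printf("Pipelined  : %d cycles\n", cycles_pipelined);

    /* A superscalar design with two pipelines could, in the ideal case,
     * retire two instructions per cycle, but only if neither pipeline
     * has to wait for the other, which is why the doubling is theoretical. */
    return 0;
}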
INSTRUCTIONS
Computers run on low-level commands called instructions. Low-level means that these commands work directly with the processor, in effect communicating with the processor's most basic capabilities. Each type of processor has a specific group of these commands on which it can act; this group is called the processor's instruction set.
The processor's instructions are accessible to human programmers through various programming languages. The instructions themselves are written in machine language, the lowest-level language of all, which consists solely of numbers and thus is rarely used by programmers. To get around this difficulty, programmers turn either to assembly language, which uses the same instructions but gives them names (such as add), or to a high-level language (HLL), in which the machine instructions are encompassed within larger-scale commands.
HLLs don't dispense with machine instructions in any way; they simply make them easier to work with. A program written in an HLL must be compiled, usually first into an internal intermediate language, then to machine language. The two stages, called a compiler front end and back end, allow the compiler writer to separate the parts of the process that are architecture-neutral from those that are architecture-specific. Ultimately, the processor must receive numerical instructions to do anything at all.
Typical instructions for the x86 instruction set, which has formed the basis of the PC environment for years, include commands for such activities as arithmetic functions, data movement, logical instructions, and input/output instructions. Arithmetic instructions include add, which adds the contents of different registers together, and inc (increment), which adds 1 to the value in the register. Data movement instructions include mov, which moves data from one register or memory address to another register or memory address, and xchg (exchange), which swaps the values in two different registers or memory addresses. All programs consist of combinations of the wide variety of instructions available to the processor.
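As a hedged illustration of how a high-level statement maps onto such instructions, the short C function below carries comments showing one plausible x86-style translation; the exact instructions a compiler emits depend on the compiler and its optimization settings.

/* Illustrative sketch: a high-level addition and the kind of x86
 * instructions it might compile to. The assembly in the comments is
 * one plausible translation, not the output of any particular compiler. */
#include <stdio.h>

int add_and_increment(int a, int b) {
    int sum = a + b;   /* roughly: mov eax, a ; add eax, b */
    sum = sum + 1;     /* roughly: inc eax                 */
    return sum;        /* result left in a register        */
}

int main(void) {
    printf("%d\n", add_and_increment(2, 3));  /* prints 6 */
    return 0;
}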