Computer Science. The English Language Perspective - Беликова, page 17
At any given instant in time, the CPU is accessing one page of a process. At that point, it does not really matter if the other pages of that process are even in memory.

Process management. Another important resource that an operating system must manage is the use of the CPU by individual processes. Processes move through specific states as they are managed in a computer system. A process enters the system (the new state), is ready to be executed (the ready state), is executing (the running state), is waiting for a resource (the waiting state), or is finished (the terminated state). Note that many processes may be in the ready state or the waiting state at the same time, but only one process can be in the running state. While running, the process might be interrupted by the operating system to allow another process its chance on the CPU. In that case, the process simply returns to the ready state. Or, a running process might request a resource that is not available or require I/O to retrieve a newly referenced part of the process, in which case it is moved to the waiting state. A running process finally gets enough CPU time to complete its processing and terminate normally. When a waiting process gets the resource it is waiting for, it moves to the ready state again.

The OS must manage a large amount of data for each active process. Usually that data is stored in a data structure called a process control block (PCB).
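The lifecycle described above can be sketched as a small state machine. The state names follow the text; the transition table itself is a hypothetical illustration, not a real OS interface:

```python
# Sketch of the five-state process lifecycle described in the text.
# The transition table is an illustration, not an actual OS data structure.
VALID_TRANSITIONS = {
    "new": {"ready"},                  # admitted by the OS
    "ready": {"running"},              # dispatched to the CPU
    "running": {"ready",               # interrupted by the OS
                "waiting",             # requested an unavailable resource or I/O
                "terminated"},         # finished normally
    "waiting": {"ready"},              # resource became available
    "terminated": set(),               # final state
}

def move(state, new_state):
    """Return the new state if the transition is legal, else raise."""
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "new"
for step in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, step)
print(state)  # terminated
```

Note that the only legal exit from the waiting state is back to ready, never directly to running, exactly as the text describes.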
Generally, each state is represented by a list of PCBs, one for each process in that state. When a process moves from one state to another, its corresponding PCB is moved from one state list to another in the operating system. A new PCB is created when a process is first created (the new state) and is kept around until the process terminates.

The PCB stores a variety of information about the process, including the current value of the program counter, which indicates which instruction in the process is to be executed next. As the life cycle indicates, a process may be interrupted many times during its execution. Interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program. At each interruption, the process's program counter must be stored so that the next time it gets into the running state it can pick up where it left off.

The PCB also stores the values of all other CPU registers for that process. The CPU's registers contain the values for the currently executing process (the one in the running state). Each time a new process is moved to the running state, the register values of the process leaving the CPU are stored into its PCB, and the register values of the new running process are loaded into the CPU. This exchange of register information, which occurs when one process is removed from the CPU and another takes its place, is called a context switch.

The PCB also maintains information about CPU scheduling. CPU scheduling is the act of determining which process in the ready state should be moved to the running state.
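A context switch can be sketched as saving one PCB's register snapshot and loading another's. The register and field names below are invented for illustration:

```python
# Minimal sketch of a context switch between two PCBs.
# "cpu" stands in for the real register file; all field names are invented.
cpu = {"pc": 0, "r0": 0, "r1": 0}

def make_pcb(pid):
    return {"pid": pid, "registers": {"pc": 0, "r0": 0, "r1": 0}}

def context_switch(old_pcb, new_pcb):
    """Save the CPU registers into old_pcb, then load new_pcb's registers."""
    old_pcb["registers"] = dict(cpu)    # save state of the outgoing process
    cpu.update(new_pcb["registers"])    # restore state of the incoming process

pcb_a, pcb_b = make_pcb("A"), make_pcb("B")
cpu.update({"pc": 42, "r0": 7})         # process A has been running for a while
context_switch(pcb_a, pcb_b)            # A leaves the CPU, B takes its place
print(cpu["pc"])                        # 0 - B starts from its own saved state
context_switch(pcb_b, pcb_a)            # switch back to A
print(cpu["pc"])                        # 42 - A resumes where it left off
```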
There are two types of CPU scheduling:
- non-preemptive scheduling, which occurs when the currently executing process gives up the CPU voluntarily (when a process switches from the running state to the waiting state, or when a program terminates);
- preemptive scheduling, which occurs when the operating system decides to favor another process, preempting the currently executing process.

First-come, first-served CPU scheduling gives priority to the earliest arriving job. The shortest-job-next algorithm gives priority to jobs with short running times. Round-robin scheduling rotates the CPU among active processes, giving a little time to each.

For many applications, a process needs exclusive access to not one resource, but several. Suppose, for example, that two processes each want to record a scanned document on a CD. Process A requests permission to use the scanner and is granted it. Process B is programmed differently and requests the CD recorder first and is also granted it. Now A asks for the CD recorder, but the request is denied until B releases it. Unfortunately, instead of releasing the CD recorder, B asks for the scanner. At this point both processes are blocked. This situation is called a deadlock. Deadlocks can occur on both hardware and software resources.

Features

Multiprocessing

Multiprocessing involves the use of more than one processing unit, which increases the power of a computer. Multiprocessing can be either asymmetric or symmetric. Asymmetric multiprocessing essentially maintains a single main flow of execution, with certain tasks being "handed over" by the CPU to auxiliary processors. Symmetric multiprocessing (SMP) has multiple, full-fledged CPUs, each capable of the full range of operations. The processors share the same memory space, which requires that each processor that accesses a given memory location be able to retrieve the same value.
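The need to coordinate access to a shared memory location can be illustrated with two threads incrementing one counter. A software lock here stands in for the hardware locking mechanism; this is a hypothetical sketch of the idea, not of how SMP hardware is actually programmed:

```python
import threading

# Two workers increment a shared counter; the lock serializes access so
# no update is lost (analogous to locking that keeps shared memory coherent).
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:                 # only one thread may touch the location at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 - every increment survived
```

Without the lock, two simultaneous read-modify-write sequences could interleave and one of the increments would be lost.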
This coherence of memory is threatened if one processor is in the midst of a memory access while another is trying to write data to that same memory location. This is usually handled by a "locking" mechanism that prevents two processors from simultaneously accessing the same location.

A subtler problem occurs with the use by processors of separate internal memory for storing data that is likely to be needed. One way to deal with this problem is called bus snooping. Each CPU includes a controller that monitors the data line for memory locations being used by other CPUs. Alternatively, all CPUs can be given a single shared cache. While less complicated, this approach limits the number of CPUs to the maximum data-handling capacity of the bus.

Larger-scale multiprocessing systems consist of lattice-like arrays of hundreds or even thousands of CPUs, which are referred to as nodes.

Multiprogramming

In order for a program to take advantage of the ability to run on multiple CPUs, the operating system must have facilities to support multiprocessing, and the program must be structured so that its various tasks are most efficiently distributed among the CPUs. These separate tasks are generally called threads. A single program can have many threads, each executing separately, perhaps on a different CPU, although that is not required.

The operating system can use a number of approaches to scheduling the execution of processes or threads. It can simply assign the next idle (available) CPU to the thread. It can also give some threads higher priority for access to CPUs, or let a thread continue to own its CPU until it has been idle for some specified time.

The use of threads is particularly natural for applications where a number of activities must be carried on simultaneously. Support for multiprogramming and threads can now be found in versions of most popular programming languages, and some languages, such as Java, are explicitly designed to accommodate it.

Multiprogramming often uses groups or clusters of separate machines linked by a network. Running software on such systems involves the use of communication protocols such as MPI (the message-passing interface).

Multitasking

Users of modern operating systems such as Microsoft Windows are familiar with multitasking, or running several programs at the same time. Each running program takes turns using the PC's central processor. In early versions of Windows, multitasking was cooperative, with each program expected to periodically yield the processor to Windows so it could be assigned to the next program in the queue.
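Cooperative multitasking as just described can be sketched with Python generators, where each "program" must explicitly yield before the next one gets a turn. This is a hypothetical illustration of the idea, not of any Windows interface:

```python
# Round-robin cooperative scheduler: each task runs until it yields.
# If a task never yielded, the others would starve - the weakness of
# cooperative multitasking noted in the text.
def program(name, steps):
    for i in range(steps):
        yield f"{name}{i}"         # voluntarily hand the processor back

def run_cooperatively(tasks):
    trace = []
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            trace.append(next(task))  # let the task run to its next yield
            queue.append(task)        # re-queue it behind the others
        except StopIteration:
            pass                      # task finished; drop it
    return trace

trace = run_cooperatively([program("A", 2), program("B", 2)])
print(trace)  # ['A0', 'B0', 'A1', 'B1'] - strict alternation
```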
Modern versions of Windows (as well as operating systems such as UNIX) use preemptive multitasking. The operating system assigns a slice of processing (CPU) time to a program and then switches to the next program regardless of what might be happening to the previous program. Systems with preemptive multitasking often give programs or tasks different levels of priority that determine how big a slice of CPU time they will get. Also, the operating system can more intelligently assign CPU time according to what a given program is doing.

Even operating systems with preemptive multitasking can provide facilities that programs can use to communicate their own sense of their priority. In UNIX systems, this is referred to as niceness. A nice program gives the operating system permission to interrupt lengthy calculations so other programs can have a turn, even if the program's priority would ordinarily entitle it to a greater share of the CPU.

Multitasking should be distinguished from two similar-sounding terms. Multitasking refers to entirely separate programs taking turns executing on a single CPU. Multithreading, on the other hand, refers to separate pieces of code within a program executing simultaneously but sharing the program's common memory space. Finally, multiprocessing, or parallel processing, refers to the use of more than one CPU in a system, with each program or thread having its own CPU.

Notes:

PCB (Process Control Block) - блок управления процессом (БУП)
PMT (Page-Map Table) - таблица страниц

Assignments

1. Translate the sentences from the text into Russian in writing, paying attention to the underlined words and phrases:
1. As long as we keep track of where the program is stored, we are always able to determine the physical address that corresponds to any given logical address.
2. At any point in time, in both fixed and dynamic partitions, memory is divided into a set of partitions, some empty and some allocated to programs.
3. Paged memory management puts much more burden on the operating system to keep track of allocated memory and to resolve addresses.
4. Thus, the pages of a process may be scattered around, out of order, and mixed among the pages of other processes.
5. Interrupts may come from either the computer's hardware or from the running program.
6.