Describe two examples of this and give your explanation as to why there is no performance improvement.

3. Give two situations where kernel multithreading would definitely be an improvement over single threading.

4. Give two situations where kernel multithreading would not be an improvement over single threading.

5. Compare context switching between user-level threads with context switching between kernel-level threads. Where are they the same? Where are they different?

6. Consider how a process is created. Compare the procedure with how a thread is created. How are resources used differently?

7. Compare the creation of active objects with the creation of threads. How do you think these procedures are different?

8. Give two situations where active objects would not be better than threading. Explain your thinking.
5 Process Scheduling

We introduced the last chapter with a circus performer: a man I remember from childhood who kept plates spinning on sticks. He could spin many plates at the same time. While his performance seemed to be focused on the spinning plates, I suspect that his real skill lay in the choice he made each time he turned his attention to a new plate. In the split second when he ran from one plate to another, keeping each spinning on those long sticks, he had to choose the plate that needed his attention most.
If he chose poorly, at least one plate would begin to wobble and eventually fall off its stick and break. If he chose wisely, he kept all the plates spinning.

We can think of this circus performer as a scheduler. He needs to make important choices that schedule plates for 'spin maintenance'. Some plates probably need his attention more than others, and he needs to make his choices wisely, according to the needs of the plates.

Computer operating systems are like that. They have a limited set of CPUs (usually only one) that are to be used by many processes at once. As the operating system shares the computing resources, choices must be made. How long should a process operate on a CPU? Which process should run next? How often do we check the system?

The concept of scheduling a CPU is very important to keeping a computer running quickly and efficiently. This chapter introduces the basic ideas of CPU scheduling and presents several scheduling algorithms. We also examine how these concepts and algorithms apply to various types of operating system architectures.

5.1 Basic Concepts

The concepts involved with scheduling a CPU seem simple on the outside but are really quite difficult upon closer inspection.
The idea of multiprogramming is essentially a simple one: several processes share processing time on a CPU. The idea of sharing is an easy concept to grasp. However, it is the mechanics of this sharing that is difficult. Processes must be started appropriately and stopped at the right time, allowing another process to take over the CPU. What is appropriate? How long does the process have the CPU? Which process takes over next? These questions make sharing a difficult concept indeed.

Concepts of Sharing

We need to be clear on how the CPU is shared. The act of scheduling is the act of moving processes from the ready state to the running state and back again. Recall that processes in the ready state are waiting in the ready queue. This ready queue is not necessarily a FIFO queue: processes do not necessarily enter and leave in a fixed order. In fact, the choice of which process to move from the ready queue to running is at the heart of process scheduling.
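To make this concrete, here is a minimal C++ sketch of a ready queue that releases processes by priority rather than by arrival order. This is not Symbian OS code; the Process structure, the priority values, and the process names are invented for illustration.

    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    // Hypothetical process record: just enough state for the example.
    struct Process {
        std::string name;
        int priority;   // higher value = more urgent
    };

    // Order the ready queue by priority, not by arrival (i.e., not FIFO).
    struct ByPriority {
        bool operator()(const Process& a, const Process& b) const {
            return a.priority < b.priority;
        }
    };

    int main() {
        std::priority_queue<Process, std::vector<Process>, ByPriority> readyQueue;

        // Processes arrive in this order...
        readyQueue.push({"editor", 2});
        readyQueue.push({"audioPlayer", 5});
        readyQueue.push({"backup", 1});

        // ...but leave in priority order: audioPlayer, editor, backup.
        while (!readyQueue.empty()) {
            std::cout << "dispatch " << readyQueue.top().name << '\n';
            readyQueue.pop();
        }
        return 0;
    }

The point of the sketch is that the data structure itself embodies a scheduling decision: changing the comparison function changes which process runs next.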
The way CPU sharing is controlled is important. Methods of sharing should accommodate the way that a process works. For example, we could allow processes to share a processor at their own discretion. This would mean that sharing would be dependent on each process – dependent on when each process decided to give up the processor. This makes it easy for a process to hog the CPU and never give it up. Such a scheme makes the operating system a bit simpler, but it would not be a great way to share things equitably on a general-purpose computer. We could also move scheduling decisions away from each process and give them to a third party – perhaps the operating system. This would make scheduling less dependent on the whim of each process and more dependent on policies implemented by a central controller.

When a process moves from the running state to the ready state without outside intervention, we call the scheduling mechanism non-pre-emptive. Many movements from the running state are non-pre-emptive: when a process moves to the waiting state or terminates, it does so by its own choice. In non-pre-emptive scheduling, a process may hang on to the CPU for as long as it wants (or needs) to.
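The following sketch models a non-pre-emptive (cooperative) scheduler in C++. The task functions and run loop are hypothetical, purely for illustration; the essential property is that control returns to the scheduler only when a task chooses to return it.

    #include <deque>
    #include <functional>
    #include <iostream>

    // A task returns true if it has more work to do, false when finished.
    // Crucially, the scheduler cannot stop a task mid-run: control comes
    // back only when the task chooses to return (its 'yield' point).
    using Task = std::function<bool()>;

    int main() {
        std::deque<Task> readyQueue;

        int count = 0;
        readyQueue.push_back([&count]() {
            std::cout << "task A, step " << ++count << '\n';
            return count < 3;    // yields after each step, finishes after 3
        });
        readyQueue.push_back([]() {
            std::cout << "task B runs once\n";
            return false;
        });

        // Non-pre-emptive loop: run the front task until it yields, then
        // requeue it if it is not done. A task that never returned would
        // hog the CPU forever -- exactly the weakness described above.
        while (!readyQueue.empty()) {
            Task t = readyQueue.front();
            readyQueue.pop_front();
            if (t()) {
                readyQueue.push_back(t);
            }
        }
        return 0;
    }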
By contrast, pre-emptive scheduling allows the operating system to interrupt a process and move it between states. Pre-emptive scheduling is usually used in general-purpose operating systems because the mechanism can be fairer and processes can be simpler. However, pre-emptive scheduling has costs associated with it. It requires more hardware support: timers must be available to enforce time limits on processes, and the hardware must support switching between processes by saving and restoring register and memory state. The operating system must also provide secure ways of sharing information between processes. Consider two processes sharing data between them. If one is pre-empted as it is writing data and the second process is then run on the CPU, the second might begin to read corrupted data that the first process did not completely write. Mechanisms must be in place that allow the processes to coordinate with each other so that such conditions are detected or prevented.
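The classic mechanism for this is a mutual-exclusion lock held around the shared data. The C++ sketch below uses two threads within one process rather than two separate processes, since standard C++ has no portable cross-process lock, but the idea is the same: a writer holding the lock can never be observed mid-write.

    #include <iostream>
    #include <mutex>
    #include <thread>

    // Shared record that must never be read half-written.
    struct Pair {
        int a = 0;
        int b = 0;
    };

    Pair shared;
    std::mutex sharedLock;   // guards every access to 'shared'

    void writer() {
        for (int i = 1; i <= 1000; ++i) {
            std::lock_guard<std::mutex> guard(sharedLock);
            // Both fields are updated under the lock, so a reader can
            // never see the new 'a' with the old 'b' (a torn write).
            shared.a = i;
            shared.b = i;
        }
    }

    void reader() {
        for (int i = 0; i < 1000; ++i) {
            std::lock_guard<std::mutex> guard(sharedLock);
            if (shared.a != shared.b) {
                std::cout << "corrupted read!\n";   // cannot happen here
            }
        }
    }

    int main() {
        std::thread w(writer);
        std::thread r(reader);
        w.join();
        r.join();
        std::cout << "done: " << shared.a << ", " << shared.b << '\n';
        return 0;
    }

Remove the two lock_guard lines and the reader can interleave with a half-finished write, which is precisely the corruption described above.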
Pre-emptive scheduling affects how the operating system is designed. The kernel must be designed to handle interrupts for a context switch at any time – even the most inopportune times. For example, if a process makes a system call that causes the kernel to make system changes but is pre-empted, what happens to the changes made by the kernel? This is complicated by the chance that the next process might depend on the changes made by the previous process's system call. Corruption of system data is likely if this is not handled correctly. In a case like this, a Linux system forces the context switch to wait until the kernel-mode changes are complete or an I/O call is made. This method ensures that processes sharing one CPU serialize their access to system resources. Even this way of coordinating access to system resources is not sufficient when there are multiple CPUs or when the operating system supports real-time processing.

Most modern operating systems use pre-emptive schedulers, but there are several examples of non-pre-emptive kernels. Microsoft Windows 3.1 used a non-pre-emptive scheduler.
Applications could give up control in several ways: they could yield it knowingly, or they could give it up through certain system calls or I/O functions.

Early Apple Macintosh operating systems were also non-pre-emptively scheduled. With version 10, Mac OS moved to the XNU kernel, which combines the Mach microkernel – an open-source design developed at Carnegie Mellon University to support operating-systems research, primarily in distributed and parallel computation – with components from FreeBSD, and which is pre-emptively scheduled.

The part of the operating system that actually performs the pre-emptive context switch is called the dispatcher.
The dispatcher comprises the set of functions that moves control of the CPU from one process to another. The dispatcher must enter kernel mode, interrupt the process that is currently running on the processor, move that process to the ready queue, choose the next process to run, activate that process's code, switch to user mode, and cause the processor to begin execution at the appropriate point in the new process.
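That sequence reads almost like pseudocode, and a simulation makes the steps explicit. The sketch below is not a real dispatcher – entering kernel mode and switching address spaces require hardware support that cannot be written in portable C++ – so every name here is hypothetical and each hardware step is reduced to a comment over simulated processes.

    #include <deque>
    #include <iostream>
    #include <string>

    struct Process {
        std::string name;
        int programCounter = 0;   // where execution resumes
    };

    std::deque<Process> readyQueue;

    // Simulated dispatcher: each step mirrors one duty listed above.
    Process dispatch(Process running) {
        // 1. Enter kernel mode and interrupt the running process
        //    (here: simply stop simulating its execution).
        std::cout << "pre-empting " << running.name << '\n';

        // 2. Move the interrupted process back to the ready queue.
        readyQueue.push_back(running);

        // 3. Choose the next process (a simple FIFO choice here).
        Process next = readyQueue.front();
        readyQueue.pop_front();

        // 4. 'Switch to user mode' and resume at the saved point.
        std::cout << "resuming " << next.name
                  << " at instruction " << next.programCounter << '\n';
        return next;
    }

    int main() {
        readyQueue.push_back({"browser", 42});
        readyQueue.push_back({"mailer", 7});

        Process current{"musicPlayer", 0};
        for (int slice = 0; slice < 3; ++slice) {
            current.programCounter += 10;   // pretend it ran for a while
            current = dispatch(current);    // context switch
        }
        return 0;
    }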
This is a tall order; there are many procedures to be performed. While the dispatcher must run as fast as possible, there is overhead involved in doing its job, and therefore a certain latency is experienced whenever the dispatcher is called in to do a context switch. This dispatch latency is inherent in the system.

Scheduling Criteria

The dispatcher is supposed to make decisions about when to remove a process from the CPU and which process to assign the CPU to next. The dispatcher makes its decisions using several criteria, and there has been much research devoted to the best way to schedule a CPU.

The CPU must be kept as busy as possible. CPU utilization is a criterion that measures the percentage of time the CPU is busy; we want this percentage to be as high as possible.
Because of the realities of executing programs, CPU utilization is rarely at 100 percent, but a well-managed system can achieve high CPU utilization of 75 to 90 percent.

Another measure of CPU activity is the amount of work done over a period of time. Called CPU throughput, this measure can be calculated in several different ways. For example, the number of jobs per day is a coarse measure. A finer measure is the number of processes completed in a time unit. Short database transactions could be measured in processes per second, while longer computations might be measured in processes per hour or per day.
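Both measures are simple ratios, so a short worked example may help. The figures below are invented: assume the CPU was busy for 45 of the last 60 seconds and completed 90 short transactions in that window.

    #include <iostream>

    int main() {
        // Invented sample figures for one 60-second observation window.
        double busySeconds = 45.0;
        double windowSeconds = 60.0;
        int jobsCompleted = 90;

        // CPU utilization: fraction of the window the CPU was busy.
        double utilization = 100.0 * busySeconds / windowSeconds;

        // Throughput: work completed per unit of time.
        double throughput = jobsCompleted / windowSeconds;

        std::cout << "utilization: " << utilization << "%\n";           // 75%
        std::cout << "throughput:  " << throughput << " jobs/second\n"; // 1.5
        return 0;
    }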
Another issue in scheduling is fairness, i.e., a measure of how much time each process spends on the CPU. We want to make an effort to make these times fair. Note here that 'fair' does not mean 'equal' in all situations; sometimes, certain processes need to spend more time on the CPU than others.

Turnaround time is yet another criterion upon which we can base scheduling decisions. Turnaround time refers to the amount of time a process takes to execute. It is measured from the time the process is defined to the operating system (i.e., the time it left the create state) to the time of termination (the time it entered the terminate state).
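Turnaround time is therefore just the difference between two timestamps. The tiny example below, with invented creation and termination times in seconds, computes the turnaround time for three processes and their average.

    #include <iostream>

    int main() {
        // Invented timestamps, in seconds since system start:
        // creation time (entering the system) and termination time.
        double created[]    = { 0.0,  2.0,  4.0};
        double terminated[] = { 9.0, 12.0, 30.0};

        double total = 0.0;
        for (int i = 0; i < 3; ++i) {
            double turnaround = terminated[i] - created[i];
            total += turnaround;
            std::cout << "process " << i << ": turnaround "
                      << turnaround << " s\n";
        }
        std::cout << "average turnaround: " << total / 3 << " s\n";  // 15 s
        return 0;
    }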