Does serializable access have the same performance hit? Explain your answer.

4. Prove that the following algorithm (the final solution to synchronizing two processes shown in Section 6.1) does indeed adhere to the three criteria of mutual exclusion, no starvation and bounded waiting.

   while (true)
   {
      ready[myID] = true;
      turn = nextID;
      while (ready[nextID] && turn == nextID) ;

      // critical section

      ready[myID] = false;

      // whatever else needs to be done
   }

5. Show that if the manipulation of semaphores through wait() and signal() were not atomic, mutual exclusion may be violated.

6. Should interrupts be disabled during the manipulation of semaphores? Explain.

7. The following code shows the critical region from the discussion about locks in Section 6.3. Rewrite it using semaphores.

   region time_buffer when (timing_count > 0)
   {
      timing_data = time_buffer[time_out];
      time_out = (time_out + 1) % buffer_size;
      timing_count--;
      display(timing_data);
   }

8. Implementation of monitors can restrict the way semaphores are obtained and released.
Explain why a signal() call must be the last call for a monitor implementation.

9. Explain why a mutex is necessary in Symbian OS. Would a semaphore with a value of 1 also work?

10. Explain why the two-tier implementation of mutexes and semaphores (in the nanokernel and the kernel) is necessary in Symbian OS.

11. Are sockets based on the mail model or the phone model of IPC? Explain your answer.

12. Symbian OS does not implement remote procedure calls. Can RPC behavior be implemented with sockets? Explain.

13. Why does an operating system need multiple types of locks?

7 Memory Management

In the last several chapters, we have discussed how the CPU can be shared as a resource among processes. By proper scheduling and using concurrency, we can use the CPU more efficiently and increase the performance of the operating system.
There are other resources in a computer system that also require sharing; after the CPU, a computer’s memory is one of the most crucial. Proper sharing of memory also affects an operating system’s efficiency and performance.

In this chapter, we discuss memory management. We develop the background and concepts necessary for this discussion and then cover management techniques. Many of these management concepts apply to desktop computers and servers, but some do not work with handheld units and smartphones, so we spend some time discussing systems that do not use all memory-management schemes. We use Symbian OS as an example of smartphone memory management.

Before we get started, the type of memory we are concerned with should be made clear. We are not concerned with what would normally be called secondary storage, such as hard disk space.
Neither are we concerned with fast, on-chip storage, such as registers or caches. We are concerned with memory used for execution of programs – which could be main memory connected by bus to the CPU or RAM storage. The main qualifier is that the memory be used for program execution.

7.1 Introduction and Background

Like the CPU in a computer, memory is a resource that every process in a system must use. Like context switching on a CPU, proper sharing of memory by processes affects the entire computer system’s performance.

Consider a scenario where a context switch means clearing memory and initializing it with the incoming process’s data. This scenario would have much overhead built into it: in addition to the context switch itself (already a costly procedure), the operating system would take the time to save the execution environment for the outgoing process, wipe out memory, and pull in the memory image for the new process.
The memory images – from the previous process and the incoming process – would have to be saved to or restored from a backing store, probably a hard disk. Hard disks are slow, and I/O time would become a bottleneck.

Clearly, memory cannot be used exclusively by one process at a time. It must be shared. Sharing memory – without constant movement of memory blocks – means that multiple programs (we referred to these as processes in Chapter 4) occupy memory at the same time. Further, this implies that programs might be in arbitrary locations in memory – and probably not the same locations each time a process is brought onto the CPU to execute. This presents a tricky situation. Each program cannot know where it will be placed in memory and is therefore written as if it alone were using that memory.
On top of all this, we must also be able to structure the environment so that processes cannot trespass on each other’s memory areas.

So our situation is complex: processes must share memory, but cannot know ahead of time what memory they will be using. Processes must believe they have all of memory to use, but in reality are cordoned off into memory sections from which they must not stray. Processes must read or write data using locations they cannot know ahead of time.
This is indeed a situation in need of some simplifying.

From Source Code to Memory

A process takes many forms as it moves from textual source code to a binary, executing memory image. Consider the steps toward execution as they are pictured in Figure 7.1. There are several stages in this process where data and instructions can be bound to memory addresses.

Figure 7.1 From source code to executing program

Life for a process begins as a source program written in a programming language.
The source code usually goes through a compiler to be translated into machine language for execution. Sometimes programs are translated directly into the form that is executed, but it is most likely that executing programs are built from several different modules. Program modules represent pieces of programs that are built individually and then combined to form the final executing unit. These other object modules are built by the programmer or contributed from other sources.

This compiler stage is one place where components of a process can be bound to memory addresses. Absolute binding is the only type of address binding possible at compile time.
Address references in the machine code can only be bound to actual addresses if the programmer knows the addresses at compile time. This is a situation that almost never happens now, but could happen for older operating systems. In early versions of MS-DOS, for example, when single programs ran to completion without context switching, the beginning address for memory references was known and could be part of the compilation process – no memory sharing was going on.
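To make this concrete, the fragment below is a minimal sketch of what absolutely bound code looks like, written in C++-style notation. The address it uses is entirely hypothetical, and the fragment assumes, as an early MS-DOS program could, that it is the only user of that memory (on a modern protected operating system the write would simply fault).

   // Hypothetical sketch of compile-time (absolute) binding. The address
   // 0x00400000 is invented for illustration and is not taken from any
   // real system. Because it is known before the program runs, the
   // compiler can embed it directly in the generated machine code.
   const unsigned long kCounterAddress = 0x00400000;

   int main()
   {
       // This reference is bound to an absolute address at compile time.
       volatile int* counter = reinterpret_cast<int*>(kCounterAddress);
       *counter = 0;   // write to the fixed, compile-time-known location
       return 0;
   }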
Note that if the starting address of a program in memory changes, then absolutely bound code must be recompiled.

Whether there is one module or many, everything must be combined for loading. This is done by the link editor. The link editor combines all the modules together into a single image. This image is composed of program modules only; no system libraries have been loaded at this time. System libraries are combined as needed by the loader.
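As a small, hypothetical illustration of separate modules (the file names and the function are invented for this sketch), the two fragments below would be compiled into separate object files; the call to ReadSensor() in main.cpp cannot be bound to an address until the link editor combines the two object modules into a single image.

   // main.cpp - one module; it only declares ReadSensor().
   extern int ReadSensor();     // defined in some other object module

   int main()
   {
       return ReadSensor();     // reference left unresolved until link time
   }

   // sensor.cpp - a second module, compiled separately; the link editor
   // combines its object code with main.cpp's to form one executable image.
   int ReadSensor()
   {
       return 42;               // placeholder value for the sketch
   }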
While absolute binding is possible at load time, relocatable binding is most often used. When the programmer does not know at compile time where the program will start in memory, code is generated in such a way that it can be relocated easily. This can affect how programs are written as well as how the code is generated. Assembly code written for relocatable execution cannot reference absolute addresses. For example, the assembler for the SPARC architecture abides by this rule by forcing programs to use only labels (not even relative offsets) when referring to program code addresses or data locations.

Code is relocated and bound at execution time. At this stage, system libraries can be loaded into memory and their addresses correctly assigned.
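One way to picture how relocatable code gets bound is a table of fixups: the linker records where every address field sits in the image, assuming some base address, and the loader patches those fields once the real load address is known. The sketch below is purely illustrative (the structure and names are invented for this example), not the mechanism of any particular loader.

   #include <cstddef>
   #include <cstdint>
   #include <vector>

   // Illustrative sketch of relocation.
   struct Image
   {
       std::vector<std::uint8_t> bytes;       // the program image as loaded
       std::vector<std::size_t>  fixups;      // offsets of address fields to patch
       std::uintptr_t            assumedBase; // base address assumed at link time
   };

   void Relocate(Image& image, std::uintptr_t actualBase)
   {
       const std::uintptr_t delta = actualBase - image.assumedBase;
       for (std::size_t offset : image.fixups)
       {
           // Each fixup marks a spot holding an absolute address; shift it
           // by the difference between the assumed and actual base.
           auto* field = reinterpret_cast<std::uintptr_t*>(&image.bytes[offset]);
           *field += delta;
       }
   }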
Note that there are two types of assignments going on here. Program code is relocatable and bound when a binary image is loaded into memory. In addition, libraries are loaded (if needed: they may already be in memory) and their address references are correctly bound within the program code. These two bindings represent execution-time binding.
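To see execution-time binding of a library reference in action, the sketch below uses the POSIX dlopen()/dlsym() interface as a stand-in; the book's discussion is not tied to POSIX, and the library name may differ between systems, so treat this purely as an illustration.

   #include <dlfcn.h>   // POSIX dynamic-loading interface
   #include <cstdio>

   int main()
   {
       // The library is brought into memory (if it is not already there)
       // only when the program runs; its load address is not known earlier.
       void* lib = dlopen("libm.so", RTLD_LAZY);   // name may vary by system
       if (!lib)
           return 1;

       // The address of cos() is looked up and bound at execution time.
       using CosFn = double (*)(double);
       CosFn cosine = reinterpret_cast<CosFn>(dlsym(lib, "cos"));
       if (cosine)
           std::printf("cos(0) = %f\n", cosine(0.0));

       dlclose(lib);
       return 0;
   }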
Execution-time binding is the most flexible type of address binding. As processes are context-switched, the program code moves in and out of memory and may change locations often. In addition, library code becomes unused as processes are context-switched and may be removed, only to be loaded into different memory locations as it is needed again. All this chaotic activity requires flexible execution-time binding.

Determining Module Dependencies

It is not obvious from running software what modules it depends on. You can determine this using an analysis program. On Solaris and Linux, the ldd command helps with this:

   ldd /usr/bin/ls

For example, running the command above on a Solaris system gives the following output, showing three library dependencies:

   libc.so.1 => /usr/lib/libc.so.1
   libdl.so.1 => /usr/lib/libdl.so.1
   /usr/platform/SUNW,Ultra-4/lib/libc_psr.so.1

On Microsoft Windows, you need third-party software, but you can list dependencies. For example, the screenshot in Figure 7.2 shows the dependencies for a program called depends.exe.

Logical and Physical Addressing

Issues of address binding lead us to the difference between logical and physical addresses.