Smartphone Operating System (779883), page 8
This is not unusual; application programs typically wait for devices. However, if the operating system were to wait for the results from the device, no other operating system duties would be performed. All other activities in the computer would therefore wait as well.

Consider an example in which an application tries to send a text message. After setting up the message data, the application initiates the transfer by signaling the mobile phone device to transfer the message. This request goes through an operating system API, which communicates through this level to the device driver and on to the hardware to send the message.

Figure 2.5 The control pathway for synchronous device I/O
It might be acceptable for the application to wait until the message is sent. However, if the operating system were forced to wait for the message, it would have to suspend all other services. That would mean that alarms would not be displayed and incoming phone calls would be ignored. If the message took a lengthy period of time to send, the phone would simply freeze until the message was finally on its way. Obviously, this is not a good situation.

The method of device communication that waits through the communication cycle is called synchronous communication. Synchronous communication causes all stages in the process to wait. This type of communication is good for real-time systems, where the system is dedicated to I/O and processing of received data, but not very useful for general-purpose systems.

Most general-purpose I/O is asynchronous.
That is, other operations can continue while waiting for I/O to complete. An I/O sequence like that in Figure 2.6 must occur. The hardware should signal that the transfer has begun and signal again when the results of the I/O request are in. Using this method, the operating system is free to process other requests and the application can even go on to do other things.

Figure 2.6 The control pathway for asynchronous device I/O
(Often this method of I/O is best for applications that must work with a graphical user interface, which must usually be updated as the data request is being processed.)

The use of asynchronous device I/O means that an operating system must keep track of the state of devices. If the operating system is going to 'get back' to handling a device after it has serviced an I/O request, it has to keep track of what was happening with that device and where it was when it last worked with it. This record-keeping function of an operating system is a very important one: it keeps an operating system busy much of the time and potentially takes up a lot of the memory needed to run an operating system.

In the quest to minimize the involvement of the operating system in device I/O, more I/O functionality can be placed on the device with the addition of more interrupts to enable communication.
Taken to an extreme, a device could do all I/O by itself, filling a specific area in shared memory with data and signaling the operating system only when data transfer is complete. This method of I/O is called direct memory access (DMA) and is extremely useful in freeing up operating system and application time. DMA is a form of asynchronous I/O, but differs from the generic form. Asynchronous I/O is fine-grained: it signals the CPU whenever there is even a small amount of data to transfer. DMA is very coarse-grained and assigns all data operations to the device. The operating system starts the I/O operation and is only notified when it is complete.

There are, then, three modes of device communication: synchronous, asynchronous and DMA.

• A handheld Linux device that plays video is likely to use synchronous communication between the video driver and the operating system.
Display of video is a real-time application and most real-time applications require synchronous I/O.

• Computers with windowing systems use asynchronous I/O to monitor GUI devices such as a mouse. When a mouse moves, it generates interrupts that cause the operating system to read the mouse events. When the mouse does not move, the operating system can safely ignore it and move on to other duties.

• Computers use DMA for larger I/O tasks. Consider reading from a disk drive.
An operating system need only send a disk drive a command to read a block of data, along with the parameters needed to complete the transfer. Reading program code from a disk in order to execute it, for example, is usually done using DMA.

Each I/O method carries with it implications for system performance. With synchronous I/O, the operating system spends all its time monitoring and servicing devices, so performance and response to users and other services are slower than with other methods.
Asynchronous I/O relieves the operating system of constant monitoring, and performance and system response therefore improve. DMA frees the operating system from almost all device I/O responsibilities and therefore produces the fastest system service and response time. Most operating systems use a combination of methods to achieve an efficient design.

Storage Structures

Along with central computer operation and device I/O, storage makes a third essential component of a computer system.
The ability to record information and refer to it again is foundational to the way modern computer systems work. A system without storage would not even be able to run a program, since modern systems are based on stored programs.² Even if it were able to run instructions (perhaps asking the user for each instruction), input could not be stored and output could only be generated one byte at a time.

The core computing cycle is very dependent on storage. This core computing cycle, often referred to as the 'fetch–execute' cycle, fetches an instruction from memory (storage), puts the instruction in a register (more storage), executes that instruction by possibly fetching more information (more storage), and stores the results of the execution in memory (even more storage).
This basic computing cycle is part of a design first developed by John von Neumann, who built it into a larger computing system based on sequential computer memory and external storage devices.

The many storage mechanisms of a computer system can be viewed as a hierarchy, as shown in Figure 2.7. Viewing these systems together allows us to consider their relationships with one another.

² Certainly, computers without disk storage are used every day.
But note that even these computers have memory for storage – sometimes large amounts of it. Any computer system has storage at least in the form of memory or registers accessible by the CPU. Most systems build their storage requirements from there.

Figure 2.7 Storage hierarchy (registers, cache, main memory, disk space, optical storage, archival storage)

• Registers are at the top of the hierarchy. This collection represents the fastest memory available to a computer system and the most expensive. Depending on how a processor is constructed, there may be a small or large set of these memory cells. They are typically used by the hardware only, although an operating system must have access to (and therefore knowledge of) a certain set of them. Registers are volatile and therefore represent temporary storage.

• Storage caches represent a buffer of sorts between fast register storage and slower main memory.
As such, caches are faster and more expensive than main memory, but slower and cheaper than register memory. On a typical computer system, the caching subsystem is usually broken into sublevels, labeled 'L1', 'L2' and so forth. The hierarchy continues to apply to these sublevels; for example, L1 caches are faster and more expensive than L2 caches. Caches represent a method to free up the hardware from waiting for reads or writes to main memory. If an item being read exists in the cache, then the cached version is used.
If data needs to be written, then the cache controller takes care of the writing and frees up the CPU for more program execution. Caches are volatile and therefore also represent temporary storage.

• Main memory represents the general-purpose temporary storage structure for a computer system. Program code is stored there while the program is executing on the CPU.
Data is stored temporarily in main memory while a program is executing. The I/O structures discussed in the previous section use main memory as temporary storage for data. This type of memory is usually external to the CPU and is sometimes physically accessible by the user (for example, on desktop systems, users can add to main memory or replace it).

• Secondary storage is a slower extension of main memory that holds large quantities of data permanently. Secondary storage is used to store both programs and data. The first – and still most common – form of secondary storage is magnetic disks.
These store bits as small chunks of a magnetic medium, using the polarity of magnetic fields to indicate a 1 or a 0. Faster storage has evolved more recently in the form of electronic disks: large collections of memory cells that act as a disk. Formats such as compact-flash cards, secure-digital cards and mini-SD cards all provide permanent storage that can be accessed in a random fashion.
These are used in the same way as magnetic media to manipulate file systems.

• Tertiary, or archival, storage is meant to be written once for archival purposes and stored for a long period of time. It is not intended to be accessed often, if ever. Therefore, it can have slow access times and slow data-retrieval rates. Examples here are magnetic tape and optical storage such as compact discs (CD-ROMs). CD-ROMs can be thought of as lying between secondary and tertiary storage, because access time on CDs is quite good.

There are several concepts built into this storage hierarchy that affect how an operating system treats each storage medium. The first, and most basic, is the model used to access storage. The idea of a file as a group of data having a specific purpose has been the model of access used since almost the invention of permanent storage.
If many files can be stored on a medium, there is also the need for organization of those files. Ideas such as directories and folders have been developed for this organization. The way that these concepts have been implemented is called a file system. The design and appearance of file systems differ across operating systems, while the concepts of files and the structure of directories remain constant.

The concept of access rights has proven useful in implementing secure storage.