Real-Time Systems. Design Principles for Distributed Embedded Applications. Herman Kopetz. Second Edition, page 8
A rule of thumb says that, in a digital system which is expected to behave like a quasi-continuous system, the sampling period should be less than one-tenth of the rise time d_rise of the step-response function of the controlled object, i.e., d_sample < (d_rise/10). The computer compares the measured temperature to the temperature set point selected by the operator and calculates the error term. This error term forms the basis for the calculation of a new value of the control variable by a control algorithm. A given time interval after each sampling point, called the computer delay d_computer, the controlling computer will output this new value of the actuating variable to the control valve, thus closing the control loop. The delay d_computer should be smaller than the sampling period d_sample. The difference between the maximum and the minimum values of the delay of the computer is called the jitter of the computer delay, Δd_computer.
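As an illustration (a minimal sketch, not code from the book; the plant values, the proportional control law, and all names are assumptions), a time-triggered loop that follows these rules might look like:

```python
import time

# Illustrative values, not from the text.
D_RISE = 0.5             # rise time of the step response, d_rise [s]
D_SAMPLE = D_RISE / 10   # rule of thumb: d_sample < d_rise / 10

def control_algorithm(error, k_p=0.8):
    # Hypothetical proportional control law; the text does not prescribe one.
    return k_p * error

def control_loop(read_temperature, write_valve, set_point, n_steps):
    """Sample the RT entity every D_SAMPLE seconds and close the loop."""
    next_sample = time.monotonic()
    for _ in range(n_steps):
        start = time.monotonic()
        t = read_temperature()                 # observation of the RT entity
        error = set_point - t                  # error term
        write_valve(control_algorithm(error))  # actuating variable -> control valve
        d_computer = time.monotonic() - start
        assert d_computer < D_SAMPLE           # required: d_computer < d_sample
        next_sample += D_SAMPLE                # time-triggered: fixed sampling points
        time.sleep(max(0.0, next_sample - time.monotonic()))
```

Deriving each sampling point from the progression of time (rather than looping "as fast as possible") is what keeps the delay between observation and output, and hence its jitter, small.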
This jitter is a sensitive parameter for the quality of control. The dead time of the control loop is the time interval between the observation of the RT entity and the start of a reaction of the controlled object due to a computer action based on this observation. The dead time is the sum of the controlled object delay d_object, which is in the sphere of control of the controlled object and is thus determined by the controlled object's dynamics, and the computer delay d_computer, which is determined by the computer implementation. To reduce the dead time in a control loop and to improve the stability of the control loop, these delays should be as small as possible. The computer delay d_computer is defined by the time interval between the sampling points, i.e., the observation of the controlled object, and the use of this information (see Fig. 1.5), i.e., the output of the corresponding actuator signal, the actuating variable, to the controlled object. Apart from the necessary time for performing the calculations, the computer delay is determined by the time required for communication and the reaction time of the actuator.

[Fig. 1.5 Delay and delay jitter: the interval from the observation of the controlled object to the output to the actuator is the delay d_computer; the delay jitter Δd is the variability of that delay over real time.]

Parameters of a Control Loop. Table 1.1 summarizes the temporal parameters that characterize the elementary control loop depicted in Fig. 1.3. In the first two columns we denote the symbol and the name of the parameter. The third column denotes the sphere of control in which the parameter is located, i.e., what subsystem determines the value of the parameter. Finally, the fourth column indicates the relationships between these temporal parameters.

Table 1.1 Parameters of an elementary control loop

  Symbol        Parameter                     Sphere of control               Relationships
  d_object      Controlled object delay       Controlled object               Physical process
  d_rise        Rise time of step response    Controlled object               Physical process
  d_sample      Sampling period               Computer                        d_sample << d_rise
  d_computer    Computer delay                Computer                        d_computer < d_sample
  Δd_computer   Jitter of the computer delay  Computer                        Δd_computer << d_computer
  d_deadtime    Dead time                     Computer and controlled object  d_computer + d_object

1.3.2 Minimal Latency Jitter

The data items in control applications are state-based, i.e., they contain images of the RT entities.
The computational actions in control applications are mostly time-triggered, e.g., the control signal for obtaining a sample is derived from the progression of time within the computer system. This control signal is thus in the sphere of control of the computer system. It is known in advance when the next control action must take place. Many control algorithms are based on the assumption that the delay jitter Δd_computer is very small compared to the delay d_computer, i.e., the delay is close to constant. This assumption is made because control algorithms can be designed to compensate a known constant delay.
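What a non-constant delay costs can be quantified: as Fig. 1.6 illustrates, the jitter Δd translates into an additional error of the sampled value, ΔT = Δd · dT(t)/dt. A minimal sketch of that relation, with purely illustrative numbers:

```python
# Additional measurement error caused by delay jitter (relation of Fig. 1.6):
#   ΔT = Δd * dT(t)/dt
# Both numbers below are illustrative assumptions, not values from the text.
dT_dt = 2.0        # temperature gradient at the sampling instant [K/s]
delta_d = 0.005    # delay jitter Δd [s]

delta_T = delta_d * dT_dt   # additional value error of the measured temperature [K]
print(delta_T)              # 0.01
```

The steeper the gradient of the measured variable, the more a given jitter corrupts the observation.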
Delay jitter brings an additional uncertainty into the control loop that has an adverse effect on the quality of control. The jitter Δd can be seen as an uncertainty about the instant when the RT entity was observed. This jitter can be interpreted as causing an additional value error ΔT of the measured variable temperature T, as shown in Fig. 1.6. Therefore, the delay jitter should always be a small fraction of the delay, i.e., if a delay of 1 ms is demanded, then the delay jitter should be in the range of a few μs [SAE95].

[Fig. 1.6 The effect of jitter on the measured variable T: the jitter Δd causes an additional measurement error ΔT = Δd · dT(t)/dt.]

1.3.3 Minimal Error-Detection Latency

Hard real-time applications are, by definition, safety-critical.
It is therefore important that any error within the control system, e.g., the loss or corruption of a message or the failure of a node, is detected within a short time with a very high probability. The required error-detection latency must be in the same order of magnitude as the sampling period of the fastest critical control loop. It is then possible to perform some corrective action, or to bring the system into a safe state, before the consequences of an error can cause any severe system failure. Almost-no-jitter systems will have shorter guaranteed error-detection latencies than systems that allow for jitter.

1.4 Dependability Requirements

The notion of dependability covers the meta-functional attributes of a computer system that relate to the quality of service a system delivers to its users during an extended interval of time.
(A user could be a human or another technical system.) The following measures of dependability attributes are of importance [Avi04]:

1.4.1 Reliability

The reliability R(t) of a system is the probability that a system will provide the specified service until time t, given that the system was operational at the beginning, i.e., at t = t_0. The probability that a system will fail in a given interval of time is expressed by the failure rate, measured in FITs (Failure In Time).
A failure rate of 1 FIT means that the mean time to a failure (MTTF) of a device is 10^9 h, i.e., one failure occurs in about 115,000 years. If a system has a constant failure rate of λ failures/h, then the reliability at time t is given by

  R(t) = exp(-λ(t - t_0)),

where t - t_0 is given in hours. The inverse of the failure rate, 1/λ = MTTF, is called the Mean-Time-To-Failure MTTF (in hours).
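The FIT arithmetic and the exponential reliability formula can be checked numerically; this is a small sketch, and the 100,000 h horizon in the last line is an arbitrary assumption:

```python
import math

failure_rate = 1e-9            # λ = 1 FIT = one failure per 10^9 device hours

mttf_hours = 1 / failure_rate  # MTTF = 1/λ = 10^9 h
mttf_years = mttf_hours / (24 * 365.25)   # "about 115,000 years" in the text

def reliability(t, lam=failure_rate, t0=0.0):
    # R(t) = exp(-λ(t - t0)), with t and t0 in hours
    return math.exp(-lam * (t - t0))

print(round(mttf_years))   # 114077
print(reliability(1e5))    # survival probability over 100,000 h, just below 1
```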
If the failure rate of a system is required to be in the order of 10^-9 failures/h or lower, then we speak of a system with an ultrahigh reliability requirement.

1.4.2 Safety

Safety is reliability regarding critical failure modes. A critical failure mode is said to be malign, in contrast with a noncritical failure, which is benign. In a malign failure mode, the cost of a failure can be orders of magnitude higher than the utility of the system during normal operation. Examples of malign failures are: an airplane crash due to a failure in the flight-control system, and an automobile accident due to a failure of a computer-controlled intelligent brake in the automobile. Safety-critical (hard) real-time systems must have a failure rate with regard to critical failure modes that conforms to the ultrahigh reliability requirement.

Example: Consider the example of a computer-controlled brake in an automobile.
The failure rate of a computer-caused critical brake failure must be lower than the failure rate of a conventional braking system. Under the assumption that a car is operated about 1 h per day on the average, one safety-critical failure per million cars per year translates into a failure rate in the order of 10^-9 failures/h. Similarly low failure rates are required in flight-control systems, train-signaling systems, and nuclear power plant monitoring systems.

Certification. In many cases the design of a safety-critical real-time system must be approved by an independent certification agency.
The certification process can be simplified if the certification agency can be convinced that:

1. The subsystems that are critical for the safe operation of the system are protected by fault-containment mechanisms that eliminate the possibility of error propagation from the rest of the system into these safety-critical subsystems.
2. From the point of view of design, all scenarios that are covered by the given load- and fault-hypothesis can be handled according to the specification without reference to probabilistic arguments. This makes a resource-adequate design necessary.
3. The architecture supports a constructive modular certification process where the certification of subsystems can be done independently of each other. At the system level, only the emergent properties must be validated.

[Joh92] specifies the required properties for a system that is designed for validation:

1. A complete and accurate reliability model can be constructed. All parameters of the model that cannot be deduced analytically must be measurable in feasible time under test.
2. The reliability model does not include state transitions representing design faults; analytical arguments must be presented to show that design faults will not cause system failure.
3. Design tradeoffs are made in favor of designs that minimize the number of parameters that must be measured (see Sect. 2.2.1).

1.4.3 Maintainability

Maintainability is a measure of the time interval required to repair a system after the occurrence of a benign failure.