This increase in the amount of scheduling overhead further decreases the computational resources that are available for the application tasks. Similar arguments hold for the overhead required for queue management.

A successful technique to avoid thrashing in explicit flow-control schemes is to monitor the resource requirements of the system continuously and to exercise stringent backpressure flow control at the system boundaries as soon as a decrease in the throughput is observed.

Example: If too many users try to establish a telephone connection and thereby overload a switch, the switch exercises backpressure by presenting a busy signal to the users.
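As an illustration of this monitoring rule, the following minimal C sketch tracks the throughput of successive observation intervals and switches backpressure on when throughput drops although the offered load does not. The data structure, the field names, and the simple comparison rule are assumptions of this example, not a prescribed implementation.

#include <stdbool.h>
#include <stdint.h>

/* Minimal sketch of throughput-based backpressure at a system boundary.
 * The monitor compares the throughput (completed requests) of the current
 * observation interval with that of the previous interval; if throughput
 * drops while the offered load does not, the system is assumed to be
 * approaching the thrashing zone and new requests are rejected at the
 * boundary (e.g., by presenting a "busy" indication to the client). */
typedef struct {
    uint32_t prev_completed;   /* requests completed in previous interval */
    uint32_t prev_offered;     /* requests offered in previous interval   */
    bool     backpressure_on;  /* reject new arrivals at the boundary?    */
} load_monitor_t;

/* Called once per observation interval with the counters of that interval. */
void monitor_update(load_monitor_t *m, uint32_t offered, uint32_t completed)
{
    /* Throughput decreases although at least as much work is offered:
     * classic thrashing signature, so switch backpressure on until the
     * throughput recovers. */
    if (completed < m->prev_completed && offered >= m->prev_offered)
        m->backpressure_on = true;
    else if (completed >= m->prev_completed)
        m->backpressure_on = false;

    m->prev_completed = completed;
    m->prev_offered   = offered;
}

/* Admission test at the system boundary ("busy signal" when it returns false). */
bool admit_new_request(const load_monitor_t *m)
{
    return !m->backpressure_on;
}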
Remember that in a real-time system, such a backpressure flow-control mechanism is not always possible.

Example: Consider a monitoring and control system for an electric power grid. There may be more than 100,000 different RT entities and alarms that must be monitored continually. In the case of a rare event, such as a severe thunderstorm when a number of lightning strikes hit the power lines within a short interval of time, many correlated alarms will occur. The computer system cannot exercise explicit flow control over these alarms, even if the system enters the thrashing zone. It follows that the design must be capable of handling the simultaneous occurrence of 100,000 different alarms.

7.3 Event-Triggered Communication

Figure 7.5 depicts event-triggered communication at the architectural level. A sender sends a message whenever a significant event (e.g., termination of a task, an interrupt signal, etc.) occurs at the sender. This message is placed in a queue at the sender's site until the basic message transport service (BMTS) is ready to transport the message to the receiver.
The communication channel can be event-triggered, rate-constrained, or time-triggered. After arrival of the message at the receiver, the message is placed in a receiver queue until the receiver consumes the message. Using the CRC field contained in every message, the BMTS checks at the receiver's site whether the contents of a message have been corrupted during transport and simply discards corrupted messages. From the architectural point of view, a BMTS is characterized by a maximum bandwidth, a transport latency, a jitter, and a reliability for the transport of correct BMTS messages. These transport parameters can be characterized by probability distributions.

Whenever queues are involved in a scenario, the possibility of queue overflow must be considered. Queue overflow will occur if the transmission rate of the sender is larger than the capacity of the network (overflow of the sender's queue) or the delivery rate of the network is larger than the reception rate at the receiver (overflow of the receiver's queue). Different event-triggered protocols take different approaches to the handling of queue overflow.
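The two overflow cases can be illustrated with a bounded message queue of the kind that sits between a sender and the BMTS, or between the BMTS and a receiver. The ring-buffer layout, the fixed capacity, and the decision to signal overflow by a failed enqueue are assumptions of this sketch; real protocol stacks differ in how they react to a full queue.

#include <stdbool.h>
#include <stddef.h>

#define QUEUE_CAPACITY 16          /* assumed, fixed number of message slots */

typedef struct {
    const void *slots[QUEUE_CAPACITY];
    size_t head;                   /* next slot to dequeue      */
    size_t tail;                   /* next slot to enqueue      */
    size_t count;                  /* messages currently queued */
} msg_queue_t;

/* Enqueue returns false when the queue is full.  At the sender's site this
 * happens when messages are produced faster than the channel can transport
 * them; at the receiver's site it happens when the channel delivers faster
 * than the receiver consumes.  What to do with the rejected message (drop it,
 * block the sender, raise an error) is the protocol-specific policy. */
bool msg_enqueue(msg_queue_t *q, const void *msg)
{
    if (q->count == QUEUE_CAPACITY)
        return false;              /* queue overflow */
    q->slots[q->tail] = msg;
    q->tail = (q->tail + 1) % QUEUE_CAPACITY;
    q->count++;
    return true;
}

const void *msg_dequeue(msg_queue_t *q)
{
    if (q->count == 0)
        return NULL;               /* queue empty */
    const void *msg = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    return msg;
}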
It is impossible to provide temporal guarantees in an open event-triggered communication scenario. If every sending component in an open communication scenario is autonomous and is allowed to start sending a message at any instant, then it can happen that all sending components send a message to the same receiver at the same instant (the critical instant), thus overloading the channel to the receiver. In fielded communication systems, we find three strategies to handle such a scenario: (1) the communication system stores messages intermediately in a buffer before the receiver, (2) the communication system exerts backpressure on the sender, or (3) the communication system discards some messages. None of these strategies is acceptable for real-time data.

A lower-level protocol, e.g., a link-level protocol that increases the reliability of a link at the cost of additional jitter, is not directly visible at the BMTS level, although its effects, the increased reliability and the increased jitter, are reflected in the characterization of the BMTS service. When a BMTS message has been sent across the Internet, we don't know what types and how many different low-level protocols have been activated.

Fig. 7.5 Event-triggered communication

In Sect. 4.3.3 an exactly-once semantics has been demanded for the transmission of event information. The implementation of an exactly-once semantics requires a bi-directional information flow between sender and receiver that is not provided at the BMTS level of our model. In our model, the exactly-once semantics must be implemented by a higher-level protocol that uses two or more BMTS messages (which are independent from the point of view of the communication service).
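One common way to obtain an exactly-once service on top of an unreliable BMTS is to add sequence numbers, acknowledgments, and sender-side retransmission at this higher protocol level, so that the receiver can detect and discard duplicates. The sketch below shows only the receiver side of such a scheme; the message layout, the function names, and the simple duplicate filter (which presumes a single sender delivering in order) are assumptions of this example, not a protocol defined in the book.

#include <stdbool.h>
#include <stdint.h>

/* Higher-level message carried inside a BMTS message (layout assumed). */
typedef struct {
    uint32_t seq;        /* sequence number assigned by the sender */
    uint32_t payload;    /* application data (stand-in)            */
} ho_msg_t;

/* Receiver state: highest sequence number already delivered. */
typedef struct {
    uint32_t last_delivered;
    bool     any_delivered;
} receiver_state_t;

/* Receiver side: deliver each message to the application at most once and
 * always return an acknowledgment (a second BMTS message flowing back to
 * the sender).  A retransmitted duplicate is acknowledged but not delivered
 * again; together with sender retransmission this yields exactly-once
 * delivery as long as the connection state is not lost. */
bool receive_ho_msg(receiver_state_t *r, const ho_msg_t *m,
                    void (*deliver)(uint32_t payload),
                    void (*send_ack)(uint32_t seq))
{
    bool duplicate = r->any_delivered && (m->seq <= r->last_delivered);
    if (!duplicate) {
        deliver(m->payload);
        r->last_delivered = m->seq;
        r->any_delivered  = true;
    }
    send_ack(m->seq);     /* acknowledge new messages and duplicates alike */
    return !duplicate;
}

/* Sender side (outline): keep the message buffered and retransmit it after
 * a timeout until the matching acknowledgment arrives, then advance seq. */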
7.3.1 Ethernet

Ethernet is the most widely used protocol in the non-real-time world. The original bus-based Ethernet, controlled by the CSMA/CD (carrier sense multiple access/collision detection) access-control strategy with exponential back-off [Met76], has over the years morphed into a switched Ethernet configuration with star topology, standardized in IEEE standard 802.3. An Ethernet switch deploys a best-effort flow-control strategy with a buffer before the link to the final receiver. If this buffer overflows, further messages to this receiver are discarded. If an exactly-once semantics must be implemented in an Ethernet system, a higher-level protocol that uses two or more Ethernet messages must be provided. An extension of standard Ethernet, time-triggered (TT) Ethernet, that supports a deterministic message transport is described in Sect. 7.5.2.
7.3.2 Controller Area Network

The CAN (Controller Area Network) protocol developed by Bosch [CAN90] is a bus-based CSMA/CA (carrier sense multiple access/collision avoidance) protocol that exercises backpressure flow control on the sender. The CAN message consists of six fields, as depicted in Fig. 7.6. The first field is a 32-bit arbitration field that contains the message identifier of 29 bits length. (The original CAN had only an arbitration field of 11 bits, supporting at most 2,032 different message identifiers.) Then there is a 6-bit control field, followed by a data field of between 0 and 64 bits in length. The data in the first three fields are protected by a 16-bit CRC field that ensures a Hamming distance of 6. The fields after the CRC are used for an immediate acknowledgment message.

  Field:  Arbitration   Control   Data Field   CRC   A   EOF
  Bits:   32            6         0-64         16    2   7

Fig. 7.6 Data format of a CAN message
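For illustration, the fields of Fig. 7.6 can be captured in a C structure. This is only a descriptive model of the field sizes discussed in the text, not the bit-exact wire format produced by a CAN controller; the type and field names are assumptions of this example.

#include <stdint.h>

/* Descriptive model of the CAN message fields of Fig. 7.6 (not a bit-exact
 * wire format).  Bit widths refer to the extended frame with a 29-bit
 * identifier, as discussed in the text. */
typedef struct {
    uint32_t arbitration;   /* 32-bit arbitration field, contains the
                               29-bit message identifier (= priority)   */
    uint8_t  control;       /*  6-bit control field (incl. data length) */
    uint8_t  data[8];       /*  data field, 0-64 bits (0-8 bytes)       */
    uint8_t  data_len;      /*  number of valid data bytes (0-8)        */
    uint16_t crc;           /* 16-bit CRC, Hamming distance 6           */
    uint8_t  ack;           /*  2-bit acknowledgment slot               */
    uint8_t  eof;           /*  7-bit end-of-frame field                */
} can_frame_model_t;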
In CAN, access to the CAN-bus is controlled by an arbitration logic that assumes the existence of a recessive and a dominant state on the bus, such that the dominant state can overwrite the recessive state. This requires that the propagation delay of the channel is smaller than the length of a bit cell of the CAN message (see Table 7.1). Assume that a 0 is coded into the dominant state and a 1 is coded into the recessive state. Whenever a node intends to send a message, it puts the first bit of the arbitration field (i.e., the message identifier) on the channel. In case of a conflict, the node with a 0 in its first identifier bit wins, and the one with a 1 must back off. This arbitration continues for all bits of the arbitration field. A node with all 0s always wins; this is the bit pattern of the highest-priority message. In CAN, the message identifier thus determines the message priority.
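The bitwise arbitration can be made concrete with a small simulation: in every bit slot the bus carries the wired-AND of all transmitted bits (0 is dominant, 1 is recessive), and a node that reads back a dominant bit where it sent a recessive one backs off. The following sketch is illustrative only; the 29-bit identifiers, the limit on contending nodes, and the helper names are assumptions of this example.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define ID_BITS 29   /* identifier length of the extended CAN frame */

/* Simulate CAN bus arbitration among 'n' contending nodes (at most 16 in
 * this sketch).  In every bit slot the bus carries the logical AND of all
 * transmitted bits (0 is dominant, 1 is recessive); a node that sends a
 * recessive bit but observes a dominant bit loses arbitration and backs
 * off.  The node with the numerically smallest identifier wins. */
static size_t arbitrate(const uint32_t *ids, size_t n)
{
    int active[16] = {0};                 /* 1 = still contending */
    if (n > 16)
        n = 16;
    for (size_t i = 0; i < n; i++)
        active[i] = 1;

    for (int bit = ID_BITS - 1; bit >= 0; bit--) {    /* MSB is sent first */
        unsigned bus = 1;                             /* recessive by default */
        for (size_t i = 0; i < n; i++)                /* wired-AND of all bits */
            if (active[i])
                bus &= (ids[i] >> bit) & 1u;
        for (size_t i = 0; i < n; i++)                /* recessive loses to dominant */
            if (active[i] && ((ids[i] >> bit) & 1u) != bus)
                active[i] = 0;
    }

    size_t winner = 0;
    for (size_t i = 0; i < n; i++)
        if (active[i])
            winner = i;
    return winner;
}

int main(void)
{
    const uint32_t ids[] = { 0x1A55u, 0x00F3u, 0x00F7u };  /* example identifiers */
    size_t w = arbitrate(ids, 3);
    /* Prints node 1 (identifier 0xF3): the lowest identifier has priority. */
    printf("node %zu with identifier 0x%X wins arbitration\n", w, (unsigned)ids[w]);
    return 0;
}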
7.3.3 User Datagram Protocol

The user datagram protocol (UDP) is the stateless datagram protocol of the Internet protocol suite. It is an efficient, unreliable, uni-directional message protocol that requires no set-up of transmission channels and supports multicasting on a local area network using a best-effort flow-control strategy. Many real-time applications use UDP because the tradeoff between latency and reliability is not hard-wired in UDP (in contrast to the Transmission Control Protocol, TCP) but can be performed at the application level, taking account of the application semantics. UDP is also used for multimedia streaming applications.
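As a concrete illustration, the following sketch sends a single datagram with the POSIX socket API. UDP hands the datagram to the network with best effort and reports only local errors, so any retransmission or sequencing policy has to be added by the application. The destination address and port are placeholders, and the snippet is a minimal example rather than a recommended real-time design.

/* Minimal UDP sender using the POSIX (BSD) socket API.  The datagram is
 * handed to the network with best effort: there is no acknowledgment and
 * no retransmission at this level, so the application decides how to trade
 * latency against reliability. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);            /* UDP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5000);                       /* placeholder port    */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);     /* placeholder address */

    const char payload[] = "sensor-sample";
    ssize_t sent = sendto(fd, payload, sizeof payload, 0,
                          (struct sockaddr *)&dst, sizeof dst);
    if (sent < 0)
        perror("sendto");   /* only local errors are reported; loss in the
                               network is invisible to the sender */
    close(fd);
    return 0;
}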
7.4 Rate-Constrained Communication

In rate-constrained communication, a minimum guaranteed bandwidth is established for each channel. For this minimum bandwidth, the maximum transport latency and the maximum jitter are guaranteed to be smaller than an upper bound. If a sender (an end system) sends more messages than the minimum guaranteed bandwidth allows, the communication system will try to transport the messages according to a best-effort strategy. If it cannot handle the traffic, it will exercise backpressure flow control on the sender in order to protect the communication system from overload generated by a misbehaving sender (e.g., a babbling end system).

In order to be able to provide the guarantees, the communication system must contain information about the guaranteed bandwidth for each sender. This information can be contained in static protocol parameters that are pre-configured into the communication controller a priori, or it can be loaded into the communication controller dynamically during run-time. Rate-constrained communication protocols provide temporal error detection and protection of the communication system from babbling idiots.

Table 7.2 Transport latency of AFDX on an A380 configuration [Mil04]

  Latency (ms):        0-0.5   0.5-1   1-2   2-3   3-4   4-8   8-13
  Percent of traffic:  2%      7%      12%   16%   18%   38%   7%

Rate-constrained protocols provide a guaranteed maximum transport latency. The actual transport latency will normally be significantly better (see Table 7.2), since under normal conditions the global traffic is much lower than the assumed peak.
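Such a rate constraint is typically enforced by a traffic-shaping rule of the token-bucket (leaky-bucket) kind: a frame is handed to the network only if enough transmission credit has accumulated since the last frame; in AFDX, for instance, the bandwidth allocation gap (BAG) of a virtual link fixes the minimum temporal distance between two of its frames. The sketch below is a generic credit check; the parameter names and the millisecond time base are assumptions of this example and are not taken from the AFDX specification.

#include <stdbool.h>
#include <stdint.h>

/* Generic token-bucket sketch for rate-constrained transmission.  Credit
 * accumulates with the guaranteed rate and is capped by a burst allowance;
 * a frame may only be handed to the network if a full frame's worth of
 * credit is available.  A sender that tries to exceed its guaranteed
 * bandwidth is throttled here (backpressure), which protects the network
 * from a babbling end system. */
typedef struct {
    uint32_t rate_bytes_per_ms;   /* guaranteed bandwidth of the channel */
    uint32_t burst_bytes;         /* maximum accumulated credit          */
    uint32_t credit_bytes;        /* currently available credit          */
    uint64_t last_update_ms;      /* time of the last credit update      */
} rate_limiter_t;

/* Returns true if a frame of 'frame_bytes' may be transmitted at 'now_ms'. */
bool may_transmit(rate_limiter_t *rl, uint64_t now_ms, uint32_t frame_bytes)
{
    uint64_t elapsed = now_ms - rl->last_update_ms;
    uint64_t earned  = elapsed * rl->rate_bytes_per_ms + rl->credit_bytes;

    /* Cap the accumulated credit at the burst allowance. */
    rl->credit_bytes   = (uint32_t)(earned > rl->burst_bytes ? rl->burst_bytes
                                                             : earned);
    rl->last_update_ms = now_ms;

    if (rl->credit_bytes < frame_bytes)
        return false;             /* over the guaranteed rate: hold the frame */
    rl->credit_bytes -= frame_bytes;
    return true;
}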