Computational Thinking - Textbook, page 4
If there is no time-delay, these algorithms usually collapse to the PID form. Predictive controllers can also be embedded within an adaptive framework.

Multivariable Control

Most processes require the monitoring of more than one variable. Controller-loop interaction exists such that the action of one controller affects other loops in a multi-loop system.
Depending upon the interrelationship of the process variables, tuning each loop for maximum performance may result in system instability when operating in closed-loop mode. Single-loop controllers are not designed to handle the effects of loop interactions. A multivariable controller, whether Multiple Input Single Output (MISO) or Multiple Input Multiple Output (MIMO), is used for systems that have these types of interactions. A model-based controller can be modified to accommodate multivariable systems. Loop interactions are treated as feed-forward disturbances and are included in the model description.
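The idea of treating a loop interaction as a feed-forward disturbance can be sketched as follows. This is a minimal illustration, not a production design: the PI gains, the interaction gain, and the signal values are invented for the example.

```python
# Minimal sketch: a PI controller whose output is corrected by a
# feed-forward term that cancels the known effect of a second loop.
# All gains and signal values here are illustrative assumptions.

def make_pi(kp, ki, dt):
    """Return a PI controller as a closure over its integral state."""
    state = {"integral": 0.0}
    def pi(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]
    return pi

def controller_with_feedforward(pi, k_interaction, other_loop_output,
                                setpoint, measurement):
    """PI output plus a feed-forward correction for the interacting loop.

    k_interaction models how strongly the other loop disturbs this one;
    subtracting its predicted effect decouples the two loops.
    """
    return pi(setpoint, measurement) - k_interaction * other_loop_output

pi1 = make_pi(kp=2.0, ki=0.5, dt=0.1)
u1 = controller_with_feedforward(pi1, k_interaction=0.3,
                                 other_loop_output=1.0,
                                 setpoint=5.0, measurement=4.0)
print(u1)  # PI action on the error minus the interaction term -> 1.75
```

The feed-forward term acts before the interaction shows up as an error in this loop, which is exactly why including interactions in the model description pays off.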
Following SISO designs, multivariable controllers that can provide time-delay compensation and handle process constraints can also be developed with relative ease.

Model-Based Predictive Control

Model-Based Predictive Control technology utilizes a mathematical model representation of the process.
The algorithm evaluates multiple process inputs, predicts the direction of the desired control variable, and manipulates the output to minimize the difference between target and actual variables. Strategies can be implemented in which multiple control variables are manipulated and the dynamics of the models are changed in real time.

Dynamic Matrix Control

Dynamic Matrix Control (DMC) is also a popular model-based control algorithm. A process model is stored in a matrix of step or impulse response coefficients. This model is used in parallel with the on-line process in order to predict future output values based on past inputs and current measurements.

Statistical Process Control

Statistical Process Control (SPC) provides the ability to determine whether a process is stable over time or, conversely, whether it is likely that the process has been influenced by "special causes" which disrupt the process.
Statistical Control Charts are used to provide an operational definition of a "special cause" for a given process, using process data. SPC has traditionally been achieved by successively plotting and comparing a statistical measure of the variable with some user-defined control limits. "On-line SPC" is the integration of automatic feedback control and SPC techniques. Statistical models are used not only to define control limits, but also to develop control laws that suggest the degree of manipulation needed to keep the process under statistical control. This technique is designed specifically for continuous systems.
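A control-chart check of this kind can be sketched in a few lines. The 3-sigma limits are the classical choice; the baseline data and the injected outlier are illustrative, not from any real process.

```python
# Minimal SPC sketch: estimate control limits from in-control data,
# then flag samples that violate them. Data and limits are illustrative.
from statistics import mean, stdev

def control_limits(baseline, n_sigma=3.0):
    """Upper/lower control limits from an in-control baseline sample."""
    m, s = mean(baseline), stdev(baseline)
    return m - n_sigma * s, m + n_sigma * s

def violations(samples, lcl, ucl):
    """Indices of samples outside the control limits -- the 'special
    causes' that would trigger a corrective manipulation."""
    return [i for i, x in enumerate(samples) if not lcl <= x <= ucl]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, ucl = control_limits(baseline)
new_samples = [10.0, 10.1, 12.5, 9.9]   # 12.5 is an injected outlier
print(violations(new_samples, lcl, ucl))  # -> [2]
```

In on-line SPC, only samples flagged by `violations` would lead to a control move; everything inside the limits is left alone.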
Manipulations are made only when necessary, as indicated by detected violations of the control limits. As a result, savings in the use of raw materials and utilities can be achieved using on-line SPC.

Neural Network-based Control

The model predictive control (MPC) method discussed here is developed primarily for process control using artificial neural networks. Conventional MPC uses a linear model of the system for prediction, which leads to inaccuracy for highly non-linear systems, such as robots. In recent years, the requirements for quality autonomic control in the process industries have increased significantly due to the increased complexity of the plants and sharper specifications of product quality.
At the same time, the available computing power has increased to a very high level. As a result, computer models that are computationally expensive have become applicable even to rather complex problems. Complex model-based techniques were developed to obtain tighter control. Model predictive control has been introduced successfully in several industrial plants. An important advantage of these control schemes is the ability to handle constraints on actuated variables and internal variables.
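The constraint-handling idea can be sketched with a deliberately simple MPC step: predict the trajectory for each admissible input and pick the best one, so that input limits are respected by construction. The first-order plant model, horizon, and limits below are illustrative assumptions.

```python
# Minimal MPC sketch for a first-order plant x(k+1) = a*x(k) + b*u:
# choose the constrained input that minimizes predicted tracking error
# over a short horizon. Model, horizon, and limits are illustrative.

def predict(x, u, a=0.9, b=0.5, horizon=5):
    """Predicted trajectory if input u is held constant over the horizon."""
    traj = []
    for _ in range(horizon):
        x = a * x + b * u
        traj.append(x)
    return traj

def mpc_step(x, target, u_min=-1.0, u_max=1.0, n_grid=201):
    """Grid search over admissible inputs -- constraints are handled
    simply by never evaluating inputs outside [u_min, u_max]."""
    best_u, best_cost = u_min, float("inf")
    for i in range(n_grid):
        u = u_min + (u_max - u_min) * i / (n_grid - 1)
        cost = sum((target - x_pred) ** 2 for x_pred in predict(x, u))
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u = mpc_step(x=0.0, target=10.0)
print(u)  # saturates at the upper input limit for a far-away target
```

Industrial MPC solves a quadratic program instead of a grid search, but the structure is the same: predicted cost over a horizon, minimized subject to actuator limits.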
In most applications of model predictive techniques, a linear model is used to predict process behavior over the horizon of interest. As most real processes show nonlinear behavior, some work has been done to extend predictive control techniques to incorporate nonlinear models. The most expensive part of the realization of a nonlinear predictive control scheme is the derivation of the mathematical model. In many cases it is even impossible to obtain a suitable physically founded process model, due to the complexity of the underlying process or the lack of knowledge of critical model parameters (such as temperature- and pressure-dependent mass transfer coefficients or viscosities). A promising way to overcome these problems is to use neural networks as nonlinear black-box models of the dynamic behavior of the process.

Such a neural network can be derived from measured input/output data of the plant. Usually, special open-loop experiments are performed to provide the data to train the neural nets.
In many practical cases, however, conventional controllers are in use at the plant which stabilize it and provide some basic, sometimes sluggish, control. Measurements of the input/output variables of the plant operated with the linear controller may provide very good training data for the neural network. This approach is more practical (the plant is always under automatic control) and more effective than using experiments without control (open-loop identification).

The Neural Network Topology and Training Algorithm

For the prediction of the behavior of the neutralization reactor, we chose a feedforward network with sigmoid activation functions. This class of neural networks is well known and relatively well understood.
Feedforward nets are easily implemented under real-time conditions. A disadvantage is that the training effort is usually high, which makes it difficult and time-consuming to explore various structures and to optimize the network structure. We overcame this problem, to a certain degree, by improving the training algorithms and by using several PCs in parallel in the training process.
Feedforward nets with at least one hidden layer have the capability to approximate any desired nonlinear mapping to an arbitrary degree of accuracy. The neural net considered here consists of four layers: one input layer, two hidden layers, and one output layer. Even though networks with only one hidden layer already have the desired approximation power, our experience is that two hidden layers give better convergence in the training process.

As inputs, the current pH value, the last four past pH values, and the corresponding five values of the impulse frequency (which determines the sodium hydroxide flow) are fed into the network.
The hidden layers consist of 10 neurons each, while the output layer consists of one neuron, the predicted next value of pH. The network thus performs a one-step-ahead prediction. In the predictive control scheme, however, it is used for multi-step prediction by applying it recursively, i.e., past values of pH are replaced by predicted values. Using the neural net in this fashion requires very good one-step-ahead prediction accuracy.
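The 10-input, two-hidden-layer architecture and its recursive use can be sketched as below. The weights are random placeholders (a trained network would supply real values), and pH is assumed to be scaled into the sigmoid's (0, 1) output range, so the numbers produced here are structural, not physically meaningful.

```python
# Sketch of the 10-10-10-1 feedforward net described above and its
# recursive use for multi-step prediction. Weights are random
# placeholders; pH is assumed scaled to the sigmoid range (0, 1).
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    return ([[random.uniform(-0.5, 0.5) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

w1, b1 = make_layer(10, 10)   # input -> first hidden layer
w2, b2 = make_layer(10, 10)   # first -> second hidden layer
w3, b3 = make_layer(10, 1)    # second hidden -> output (next pH)

def predict_one_step(ph_history, freq_history):
    """5 past pH values + 5 impulse-frequency values -> next pH."""
    return layer(layer(layer(ph_history + freq_history, w1, b1),
                       w2, b2), w3, b3)[0]

def predict_multi_step(ph_history, freq_future, n_steps):
    """Recursive multi-step prediction: predicted pH values replace
    measured ones in the input window, as in the control scheme."""
    ph = list(ph_history)
    out = []
    for k in range(n_steps):
        ph_next = predict_one_step(ph[-5:], freq_future[k])
        ph.append(ph_next)
        out.append(ph_next)
    return out

# Three-step prediction from a flat (scaled) pH history of 0.5.
preds = predict_multi_step([0.5] * 5, [[0.5] * 5] * 3, n_steps=3)
print(len(preds))  # -> 3
```

The recursion in `predict_multi_step` is exactly why one-step-ahead accuracy matters: any one-step error is fed back as an input and compounds over the horizon.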
The network topology was chosen based on experiments with different structures. The net is obviously redundant in the sense that, from system-theoretic considerations, two past inputs should be sufficient, because the order of the physical system is 2, at least in a first approximation, if the sensor dynamics are included. The results, however, were much better with more delayed inputs, corresponding to a distribution of the information over more nodes than necessary. The same applies to the number of nodes in the hidden layers.
The structure chosen is not minimal (and there is not much point in squeezing it to the limit), but the one that gave the best compromise in terms of robust prediction vs. training effort. The prediction error is not very sensitive to the number of neurons in the hidden layer.

Training Algorithm

In order to make the neural network perform the desired mapping from the input layer to the output layer, one usually searches for the optimal connection weights w_ij between the neurons by so-called training algorithms. The most popular training algorithm for feedforward networks with sigmoid activation functions is the generalized delta rule, or backpropagation,

    Δw_ij = -K ∂E/∂w_ij,

where E is the sum of the squares of the differences between the network outputs and the desired outputs (targets) L for the set of R training patterns.
As the backpropagation algorithm is a steepest-descent method, it has the disadvantage of converging very slowly and being vulnerable to getting caught in local minima of E. To overcome these disadvantages, a so-called momentum term can be included to slide over small minima in the error surface:

    Δw_ij(n) = -K ∂E/∂w_ij + α Δw_ij(n-1).

For further acceleration, the step size K can be chosen individually for each weight in the net and can also be adapted according to the learning progress. All these improvements result in a significant speed-up of the learning process, but there is still a tendency to get caught in local minima. Also, the convergence properties of the algorithm depend strongly on the initial settings of the weights. The backpropagation-based learning algorithm described above varies only the weights of the neural network to achieve the desired mapping.
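The momentum update can be demonstrated on a one-dimensional error surface, which keeps the mechanics visible without a full network. The surface E(w) = (w - 2)^2 and the values of K and α below are illustrative choices, not tuned parameters from the text.

```python
# Sketch of the momentum update dw(n) = -K*dE/dw + alpha*dw(n-1)
# on a one-dimensional quadratic error surface E(w) = (w - 2)^2.
# Step size K and momentum alpha are illustrative choices.

def dE(w):
    return 2.0 * (w - 2.0)   # derivative of E(w) = (w - 2)^2

K, alpha = 0.1, 0.9          # step size and momentum coefficient
w, delta = 0.0, 0.0          # initial weight and previous update
for _ in range(200):
    delta = -K * dE(w) + alpha * delta   # momentum-augmented step
    w += delta

print(w)  # converges toward the minimum at w = 2
```

The previous step `delta` keeps contributing to the current one, which carries the iterate across shallow dips in E; on a real error surface the same rule is applied to every weight w_ij independently.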