Int J Adv Manuf Technol (2012) 62:183–203 DOI 10.1007/s00170-011-3810-8 ORIGINAL ARTICLE

Nonlinear constrained optimal control problem: a PSO–GA-based discrete augmented Lagrangian approach

M. A. Rahim · Haris M. Khalid · Amar Khoukhi

Received: 28 July 2011 / Accepted: 22 November 2011 / Published online: 3 January 2012 © Springer-Verlag London Limited 2011

Abstract This work deals with the optimal control problem, which is solved using a discrete augmented Lagrangian-based nonlinear programming approach. It is shown that this technique guarantees satisfactory performance by minimizing the input energy and maximizing the output. Further, a hybrid particle swarm and genetic algorithm-based approach is used to obtain the optimal values of the Lagrange multipliers and the required parameters. The designed scheme has been successfully tested through extensive simulation on a laboratory-scale coupled-tank system, and its success encourages its extension to other physical systems.

Keywords Optimal control · Augmented Lagrangian · Nonlinear programming · Particle swarm optimization · Genetic algorithm · Control of two-tank system

1 Introduction

Many processes in chemical and biotechnological industries are described by multiple sets of differential

M. A. Rahim, Faculty of Business Administration, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada; e-mail:
[email protected] H. M. Khalid (B) · A. Khoukhi Department of Systems Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia e-mail:
[email protected]

and algebraic equations. As such, they are difficult to control and optimize during transitions between different set-points. The switches can involve different regimes of operation (occurrence of flooding in distillation columns, explosive regions in mixtures of gases, etc.) or external actions (addition of a second reactor when production increases, etc.). These are dynamic nonlinear optimal control problems. In theoretical and numerical research, optimal control problems have been either treated as cone-constrained optimization problems in functional spaces or studied using specialized tools. In the first approach, problems of optimal control are placed in the broader framework of optimization problems, and general techniques can be used to solve them, whereas the second approach allows us to take maximal advantage of the specific structure of the problems. The same situation arises in applications of Lagrangian-based methods for solving optimal control problems numerically.

The paper is organized as follows: In Section 2, the related work is presented, and the optimal control problem statement is considered in Section 3. Section 4 gives the system model and description, and Section 5 discusses the implementation and simulation results for all the techniques implemented. Section 6 gives the comparison of results between the iterative method and the genetic algorithm. Section 7 gives the surface plots for analysis. Finally, some conclusions are given in Section 8.

2 Related works

There are several approaches to the solution of such a dynamic optimization problem. If the process to be optimized can be described accurately enough by a piecewise linear and logic formulation, powerful algorithms can be developed in the area of explicit model predictive control, as can be seen in [1].
If fully nonlinear processes are concerned, the original dynamic optimization problem has to be approximated by some simplified formulation. One usual approach is complete discretization of the state and control variables (orthogonal collocation); such a formulation can be found in [2]. Another possibility is to leave the states intact and approximate only the control variables as piecewise constant, or with some higher-order approximation. This approach is known as control vector parametrization. Here, different formulations can be found, depending on how the gradients of the resulting nonlinear programming problem (NLP) are calculated. In [3, 4], a system of sensitivity equations is formed, and the gradients are calculated from its solution. The advantage of this method is the easy formulation of the problem and the forward integration of both the state and sensitivity equations. Its drawback lies in the large system of differential equations, as each optimized parameter generates a set of differential equations with the same dimension as the number of states of the optimized process. Another possibility, which is pursued in this work, is to calculate the gradients of the NLP via optimal control theory using the so-called co-state, or adjoint, equations [5]. The advantage is that the number of differential equations is proportional not to the number of optimized parameters but to the number of constraints. On the other hand, the adjoint equations have to be solved backward in time, which makes the implementation more difficult. When dealing with processes comprising a large number of state equations and only a small number of state-dependent constraints, this approach has favorable properties compared with the calculation of sensitivities. The classical Lagrange–Newton method [6], one of the most efficient numerical methods for solving optimization problems, was developed for problems with equality-type constraints.
In this method, the Newton procedure is applied to the first-order optimality system, which has the form of a system of equations. In the case of inequality-type constraints, the first-order optimality system cannot be expressed as an equation. However, it can be expressed as an inclusion, or the so-called generalized equation [7]. It was shown by Robinson [7] that a Newton-type procedure applied to this generalized equation is locally quadratically convergent to the solution, provided that a property called strong regularity is satisfied. This approach has been successfully applied to a class of nonlinear cone-constrained optimization problems in infinite-dimensional spaces [8–10] and to optimal control problems subject to control and/or state constraints [11]. Considerable work has also been done in optimal dynamic modeling and optimal time-energy off-line programming [12–14]. Hybridizing classical optimization techniques with evolutionary algorithms is therefore an attractive option. It should be noted that particle swarm optimization–genetic algorithm (PSO–GA)-based NLP-constrained optimization is applied here for the optimal control of the system. In recent years, metaheuristic techniques such as genetic algorithms and particle swarm optimization have been used extensively to solve complex real-world problems. GAs are a special type of evolutionary algorithm, i.e., algorithms that simulate biological processes to solve search and optimization problems. Given a specific problem, potential solutions are typically encoded as bit strings, constituting a population. The bit strings are allowed to reproduce on the basis of their fitness, thus forming a new population. Iterating this process, the population evolves according to a "natural selection and survival of the fittest" process similar to the one described by Charles Darwin in the Origin of Species.
If the GA is implemented successfully, the final population should consist of maximally fit individuals approximating the sought optimal solution. GAs have been implemented for a wide variety of problems, both real-world (e.g., fault diagnosis, fault-tolerant control) and abstract (e.g., solving NP-complete problems [15]). The bulk of the GA literature is concerned with practical applications; for a comprehensive bibliography and survey, see [16, 17]. The genetic algorithm, as developed by Goldberg [16, 17], is a stochastic global search and optimization method that mimics the process of natural biological evolution; indeed, genetic algorithms are inspired by Darwin's theory of evolution. The basic steps of a GA are as follows:

Step 1 Coding of the variables using a suitable representation
Step 2 Random generation of an initial population of chromosomes
Step 3 Evaluation of the fitness function of each individual in the population
Step 4 Stop if the stopping condition has been achieved; otherwise go to step 5
Step 5 Perform reproduction, crossover, and mutation
Step 6 Formation of the new generation from the individuals obtained in step 5
Step 7 Go to step 3

PSO has attracted much attention among researchers and has been used to solve complex optimization problems with wide applications in different fields. It was invented by Kennedy and Eberhart [18–20] while attempting to simulate the choreographed, graceful motion of swarms of birds as part of a sociocognitive study investigating the notion of collective intelligence in biological populations. The basic PSO algorithm consists of three steps [21, 22]: evaluating the fitness of each particle, updating the individual and global bests, and updating each particle's velocity and position, the particle changing its position from one iteration to the next based on the velocity update.
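As an illustration only (not the authors' implementation), the seven GA steps above can be sketched for a toy one-variable problem; the bit-string length, population size, operator probabilities, and fitness function below are hypothetical choices:

```python
import random

def ga_maximize(fitness, n_bits=8, pop_size=20, generations=40,
                p_cross=0.8, p_mut=0.05, seed=1):
    """Minimal GA following steps 1-7: bit-string coding, tournament
    selection, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    # Steps 1-2: code the variable as a bit string; random initial population
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    decode = lambda bits: int("".join(map(str, bits)), 2)
    best = max(pop, key=lambda c: fitness(decode(c)))
    for _ in range(generations):              # Step 4: fixed-generation stop
        def pick():                           # Step 3 + 5: fitness-based selection
            a, b = rng.sample(pop, 2)
            return a if fitness(decode(a)) >= fitness(decode(b)) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:        # Step 5: one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (p1, p2):                # Step 5: bit-flip mutation
                children.append([b ^ 1 if rng.random() < p_mut else b for b in c])
        pop = children[:pop_size]             # Step 6: new generation (Step 7: loop)
        best = max(pop + [best], key=lambda c: fitness(decode(c)))
    return decode(best)

# toy fitness with its peak at x = 100 on the 8-bit range 0..255
x_star = ga_maximize(lambda x: -(x - 100) ** 2)
```

With the fixed seed this converges near the peak; the loop structure, not the toy fitness, is the point of the sketch.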
PSO is initialized with a group of random particles (solutions) with initial positions $S_i^k$ and velocities $V_i^k$ given by Eqs. 1 and 2:

$S_i^k = S_{\min} + \mathrm{rand} \times (S_{\max} - S_{\min})$   (1)

$V_i^k = V_{\min} + \mathrm{rand} \times (V_{\max} - V_{\min})$   (2)

where $S_i^k$ is the initial position of the ith particle at the kth iteration, $S_{\min}$ and $S_{\max}$ are the lower and upper bounds on the design variables, and rand is a uniformly distributed random variable between 0 and 1. PSO then searches for the optimal solution by updating generations. Each particle is updated by means of two "best" values in successive iterations, namely "pbest" and "gbest". The pbest is the best solution (fitness) the particle has achieved so far; the gbest is the best value obtained so far by any particle in the population. After finding these two best values, the particle updates its velocity and position by Eqs. 3 and 4:

$V_i^{k+1} = w\,V_i^k + c_1\,\mathrm{rand}_1 \times (\mathrm{pbest}_i - S_i^k) + c_2\,\mathrm{rand}_2 \times (\mathrm{gbest} - S_i^k)$   (3)

$S_i^{k+1} = S_i^k + V_i^{k+1}$   (4)

where $V_i^{k+1}$ is the velocity of the ith individual at iteration k+1, $V_i^k$ is its velocity at iteration k, w is the inertia weight, and $c_1$ and $c_2$ are the self- and swarm-confidence factors. $\mathrm{rand}_1$ and $\mathrm{rand}_2$ are random numbers between 0 and 1; $\mathrm{pbest}_i$ is the best position of the ith individual, and gbest is the best position among all individuals. The above procedure is repeated until a criterion is met, usually a sufficiently good fitness value or a maximum number of generations.

In this paper, the main aim is to show a detailed derivation of the dynamic augmented Lagrangian-based optimization based on optimal control theory. The results obtained are then presented on an example of a benchmarked laboratory-scale two-tank level control system that exhibits nonlinear dynamics and is a widely used prototype in wastewater treatment plants, petrochemical plants, and oil/gas systems.
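A minimal sketch of the update rules in Eqs. 1–4, applied to a toy one-dimensional objective; the swarm size, inertia weight w = 0.7, and confidence factors c1 = c2 = 1.5 are illustrative assumptions, not values from this paper:

```python
import random

def pso_minimize(f, s_min, s_max, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """PSO loop of Eqs. 1-4: random initialization, then velocity and
    position updates driven by pbest and gbest."""
    rng = random.Random(seed)
    v_max = s_max - s_min
    # Eqs. 1-2: random initial positions and velocities
    S = [s_min + rng.random() * (s_max - s_min) for _ in range(n_particles)]
    V = [-v_max + rng.random() * 2 * v_max for _ in range(n_particles)]
    pbest = S[:]                    # best position seen by each particle
    gbest = min(S, key=f)           # best position in the whole swarm
    for _ in range(iters):
        for i in range(n_particles):
            # Eq. 3: velocity update with inertia and confidence factors
            V[i] = (w * V[i]
                    + c1 * rng.random() * (pbest[i] - S[i])
                    + c2 * rng.random() * (gbest - S[i]))
            # Eq. 4: position update, clipped to the bounds
            S[i] = min(max(S[i] + V[i], s_min), s_max)
            if f(S[i]) < f(pbest[i]):
                pbest[i] = S[i]
                if f(S[i]) < f(gbest):
                    gbest = S[i]
    return gbest

# toy objective with its minimum at x = 2 on [-10, 10]
x_star = pso_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```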
PSO–GA-based optimization for the optimal control of the augmented Lagrangian functions with co-state updates is then carried out, which is the main contribution of this paper. It may be noted that the PSO and GA algorithms are not hybridized in the proposed scheme. The optimal results of the GA (cost function, optimal heights, and optimal voltage value) are calculated first, and these results are then inserted as an input to the PSO algorithm for further refinement.

3 Problem statement

Complex optimal control problems require serious attention in the process industries. Many units in process industries are described by multiple sets of differential and algebraic equations. Many complex control tasks, for example those associated with controlling an autonomous vehicle to carry out a sequence of maneuvers or with controlling a collection of interacting process units in a process plant, involve two levels of decision making. At one level, it is necessary to achieve a certain maximum output, whereas at the same time it is required to minimize the possibility of fault. Hybrid control addresses the problem of integrating this minimization and maximization for decision making. The early optimal control of mission-critical systems becomes highly crucial for preventing failure of equipment, loss of productivity and profits, management of assets, and reduction of shutdowns. The proposed scheme has been evaluated on the above-cited process control system. The scheme is carried out by jointly interpreting model outputs. The implementation plan for the proposed scheme is shown in Fig. 1.

Fig. 1 Implementation plan for the evaluation of the proposed scheme
4 System model and description

4.1 Experimental setup and process data collection

The benchmarked laboratory-scale two-tank process control system has been used to collect data at a sampling rate of 50 ms. Process data have been generated through an experimental setup as shown in Fig. 2. The main objective of the benchmarked dual-tank system is to reach a reference height of 200 ml in the second tank. During this process, several faults have been introduced, such as leakage faults, sensor faults, and actuator faults. Leakage faults have been introduced through the pipe clogs of the system, the knobs between the first and second tank, etc. Sensor faults have been introduced by adding a gain in the circuit, as if there were a fault in the level sensor of the tank. Actuator faults have been introduced by adding a gain in the setup for the actuator, which comprises the motor and pump. A proportional-integral (PI) controller works in a closed-loop configuration to reach the desired height in the second tank. Due to the inclusion of faults, the controller found it difficult to reach the desired level. For this reason, the power of the motor has been increased from a scale of 0–5 V to a scale of 5–18 V in order to provide maximum throttle to reach the desired level. In doing so, the actuator performed well in achieving the desired level, but it also suppressed the faults of the system, which made the task of detecting the faults even more difficult.

Fig. 2 a The two-tank system interfaced with LabVIEW through a DAQ and the amplifier for the magnified voltage. b The LabVIEW setup of the apparatus including the circuit window and the block diagram of the experiment

After the collection of data, measures such as settling time, steady-state value, and coherence spectra can provide insight into the fault. In this paper, in particular, the leakage fault has been considered.
The hydraulic height and the liquid output flow rate of the second tank are the inputs, while the leakage fault level on a discrete scale of 1–4 is the considered output. Data are collected by introducing a leakage fault in the closed-loop system.

4.2 Model of the coupled-tank system

The physical system under evaluation is formed of two tanks connected by a pipe. The leakage is simulated in the tank by opening the drain valve. A DC motor-driven pump supplies the fluid to the first tank, and a PI controller is used to control the fluid level in the second tank by maintaining it at a specified level, as shown in Fig. 3. A step input is applied to the DC motor-pump system to fill the first tank. The opening of the drainage valve introduces a leakage in the tank. Various types of leakage faults are introduced, and the liquid height in the second tank, H2, and the inflow rate, Qi, are both measured. The National Instruments LabVIEW package is employed to collect these data.

Fig. 3 Process control system: a lab-scale two-tank system

A benchmark model of a cascade connection of a DC motor and a pump, relating the input to the motor, u, and the flow, Qi, is a first-order system (see Eq. 5):

$\dot{Q}_i = -a_m Q_i + b_m \phi(u)$   (5)

where $a_m$ and $b_m$ are the parameters of the motor-pump system and $\phi(u)$ is a dead-band and saturation type of nonlinearity. It is assumed that the leakage $Q_\ell$ occurs in tank 1 and is given by (see Eq. 6):

$Q_\ell = C_{d\ell}\sqrt{2gH_1}$   (6)

With the inclusion of the leakage, the liquid-level system is modeled by (see Eqs. 7 and 8):

$A_1 \frac{dH_1}{dt} = Q_i - C_{12}\,\varphi(H_1 - H_2) - C_\ell\,\varphi(H_1)$   (7)

$A_2 \frac{dH_2}{dt} = C_{12}\,\varphi(H_1 - H_2) - C_0\,\varphi(H_2)$   (8)

where $\varphi(\cdot) = \operatorname{sign}(\cdot)\sqrt{2g(\cdot)}$, $Q_\ell = C_\ell\,\varphi(H_1)$ is the leakage flow rate, $Q_0 = C_0\,\varphi(H_2)$ is the output flow rate, $H_1$ and $H_2$ are the heights of the liquid in tanks 1 and 2, $A_1$ and $A_2$ are the cross-sectional areas of the two tanks, $g = 980$ cm/s² is the gravitational constant, and $C_{12}$ and $C_0$ are the discharge coefficients of the inter-tank and output valves, respectively.

The model of the two-tank fluid control system shown in Fig. 3 is of second order and is nonlinear with a smooth square-root type of nonlinearity. For design purposes, a linearized model of the fluid system is required and is given in Eqs. 9 and 10:

$\frac{dh_1}{dt} = b_1 q_i - (a_1 + \alpha) h_1 + a_1 h_2$   (9)

$\frac{dh_2}{dt} = a_2 h_1 - (a_2 + \beta) h_2$   (10)

where $h_1$ and $h_2$ are the increments in the nominal (leakage-free) heights $H_1^0$ and $H_2^0$, with

$b_1 = \frac{1}{A_1}, \quad a_1 = \frac{C_{12}\,g}{\sqrt{2g(H_1^0 - H_2^0)}}, \quad \beta = \frac{C_0\,g}{\sqrt{2gH_2^0}}, \quad a_2 = a_1 + \frac{C_{d0}\,g}{\sqrt{2gH_2^0}}, \quad \alpha = \frac{C_{d\ell}\,g}{\sqrt{2gH_1^0}}$

and the parameter α indicates the amount of leakage. A PI controller, with gains $k_p$ and $k_I$, is used to maintain the level of tank 2 at the desired reference input r (see Eq. 11):

$\dot{x}_3 = e = r - h_2, \qquad u = k_p e + k_I x_3$   (11)

The linearized model of the entire system formed by the motor, pump, and tanks is given by (see Eq. 12):

$\dot{x} = Ax + Br, \qquad y = Cx$   (12)

where

$x = \begin{bmatrix} h_1 \\ h_2 \\ x_3 \\ q_i \end{bmatrix}, \quad A = \begin{bmatrix} -(a_1+\alpha) & a_1 & 0 & b_1 \\ a_2 & -(a_2+\beta) & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & -b_m k_p & b_m k_I & -a_m \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 & 1 & b_m k_p \end{bmatrix}^T, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$   (13)

where $q_i$, $q_\ell$, $q_0$, $h_1$, and $h_2$ are the increments in $Q_i$, $Q_\ell$, $Q_0$, $H_1^0$, and $H_2^0$, respectively; the parameters $a_1$ and $a_2$ arise from the linearization, whereas α and β are associated with the leakage and the output flow rate, respectively, i.e., $q_\ell = \alpha h_1$, $q_0 = \beta h_2$.

Fig. 4 sign(.) profile for a certain set of numbers

Proposition During the implementation process, sign(.) can be approximated with arc tangent.
A relationship for the approximation is as follows:

$\operatorname{sign}(x) \approx \arctan\!\left(\frac{x}{\sqrt{1 - x^2}}\right), \quad |x| < 1$

Likewise, after the approximation, the profiles are as shown in Figs. 4 and 5, where Fig. 4 shows the sign(.) profile and Fig. 5 the arctangent profile for a certain set of numbers.

Fig. 5 Arctangent profile for a certain set of numbers

Note: The plots of sign(.) and arctangent use different scales, as the arctan(.) function gives the approximation over an expanded range of the x-axis, where the x-axis represents the input.

5 Implementation and simulation results

The tasks of our optimal control scheme are executed with increasing precision accompanied by a more detailed optimality picture. The collected data are first pre-processed and normalized. A Kalman filter is then used to obtain sets of feasible solutions. The states are then estimated using Lagrange multipliers. Penalty parameter estimates are also updated, followed by iterative aggregated updates. To obtain a cross-optimal solution, the functions have been optimized using the PSO–GA-based approach. The flowchart of the implementation can be seen in Fig. 6.

5.1 Kalman filter-based feasible solution for optimal control

The Kalman filter is a copy of the fault-free (or nominal) state-space model of the system, denoted (A0, B0, C0), driven by the residual e(k), which is the error between the system output y(k) and its estimate ŷ(k), i.e., the residual is e(k) = y(k) − ŷ(k). The Kalman filter becomes (see Eqs. 14–16):

$\hat{x}(k+1) = A_0 \hat{x}(k) + B_0 u(k) + K_0 (y(k) - C_0 \hat{x}(k))$   (14)

$\hat{y}(k) = C_0 \hat{x}(k)$   (15)

$e(k) = y(k) - C_0 \hat{x}(k)$   (16)

where $K_0$ is the n × 1 Kalman gain vector and $\hat{x}(k)$ is the estimate of the state x(k). The Kalman filter estimates the states by fusing the information provided by the measurement y(k) and the a priori information contained in the model (A0, B0, C0).
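For illustration, Eqs. 14–16 can be sketched for a hypothetical scalar nominal model (A0, B0, C0) with a fixed gain K0 (all numeric values below are assumed, not taken from the setup); on fault-free data the residual e(k) stays at the noise level:

```python
import random

def kalman_residuals(A0, B0, C0, K0, u_seq, y_seq, x0=0.0):
    """Steady-state Kalman filter of Eqs. 14-16 for a scalar nominal
    model (A0, B0, C0) with a fixed gain K0; returns the residuals e(k)."""
    x_hat, residuals = x0, []
    for u, y in zip(u_seq, y_seq):
        e = y - C0 * x_hat                      # Eq. 16: residual
        residuals.append(e)
        x_hat = A0 * x_hat + B0 * u + K0 * e    # Eq. 14: residual-driven update
    return residuals

# fault-free data generated from the same nominal model plus small noise
rng = random.Random(0)
A0, B0, C0, K0 = 0.9, 0.5, 1.0, 0.3
x, u_seq, y_seq = 0.0, [], []
for k in range(200):
    u = 1.0
    y = C0 * x + rng.gauss(0.0, 0.01)   # measurement with small noise
    u_seq.append(u)
    y_seq.append(y)
    x = A0 * x + B0 * u                 # true plant step
res = kalman_residuals(A0, B0, C0, K0, u_seq, y_seq)
```

A leakage-type fault would appear as a sustained bias in this residual sequence, which is what the feasible-solution analysis of Fig. 7 inspects.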
This fusion is based on the a priori information of the plant and measurement noise covariances, Q and R, respectively. When Q is small, implying that the model is accurate, the state estimate is obtained by weighting the plant model more than the measurement; the Kalman gain K0 will then be small. On the other hand, when R is small, implying that the measurement model is accurate, the state estimate is obtained by weighting the measurement more than the plant model; the Kalman gain K0 will then be large. The larger K0 is, the faster the response of the filter, but the larger the variance of the estimation error becomes. Thus, there is a trade-off between a fast filter response and a small covariance of the residual. An adaptive on-line scheme is employed to tweak the a priori choice of the covariance matrices so that an acceptable trade-off between the Kalman filter performance and the covariance of the residual is reached.

Fig. 6 Flowchart for the implementation of the proposed scheme

The initial feasible solution for the Kalman filter can be seen in Fig. 7, where the height profile is shown in the upper window and the Kalman filter residual analysis in the lower window.

Remark 1 The experimental data collected from the system are nonlinear and are fed into the steady-state Kalman filter. As the basic steady-state Kalman filter is used for obtaining the feasible solution, the model has been linearized. A nonlinear model could have been used with an extended or unscented Kalman filter, in particular to capture the nonlinearities and update the covariance at every iteration.

5.2 Cost function of the system

$L_p = -\frac{1}{2} H_1^2(t) - \frac{1}{2} H_2^2(t) + \frac{1}{2} u^2$   (17)

Fig.
7 Kalman filter residual analysis considered as feasible solution

Note: The cost function is selected considering our requirement of maximum output (i.e., the hydraulic heights of the tanks) with minimum energy (i.e., the voltage input) (see Eq. 17).

Constraints:

$H_1(t+1) = H_1(t) + \frac{1}{A_1}\left(Q_i - C_{12}\arctan(H_1 - H_2)\sqrt{2g(H_1 - H_2)} - C_l\arctan(H_1)\sqrt{2gH_1}\right) T_s$   (18)

$H_2(t+1) = H_2(t) + \frac{1}{A_2}\left(C_{12}\arctan(H_1 - H_2)\sqrt{2g(H_1 - H_2)} - C_0\arctan(H_2)\sqrt{2gH_2}\right) T_s$   (19)

$Q_i(t+1) = Q_i(t) + \left(b_m\arctan(u)\sqrt{2gu(t)} - a_m Q_i(t)\right) T_s$   (20)

$Q_L - C_{dl}\sqrt{2gH_1} = 0$   (21)

$e - r + H_2 = 0$   (22)

$u - k_p e - k_I x_3 = 0$   (23)

Bounds (24):

• 0 ≤ Qi ≤ 200
• 0 ≤ Q0 ≤ 200
• 0 ≤ H1 ≤ 200
• 50 ≤ H2 ≤ 200
• 5 ≤ u ≤ 18
• 0.25 ≤ C0 ≤ 1.0
• 0.25 ≤ C1 ≤ 1.0
• 0 ≤ am ≤ 18
• 0 ≤ bm ≤ 18

5.3 Augmented Lagrangian method for optimal control

5.3.1 Lagrangian algorithmic framework of optimal control

The augmented Lagrangian multiplier method combines the Lagrange multiplier and penalty function methods. The augmented Lagrangian function is given as (see Eq. 25):

$A(X, \alpha, r_k) = f(X) + \sum_{j=1}^{p} \alpha_j h_j(X) + r_k \sum_{j=1}^{p} h_j^2(X)$   (25)

where $r_k$ is the penalty parameter. It can be noted that the function A reduces to the Lagrangian if $r_k = 0$.

Given $\mu_0 > 0$, $x_0$, $\lambda_0$, and a final tolerance tol:
For k = 0, 1, 2, . . .
  Starting from $x_k$, minimize $L_A(x, \lambda_k; \mu_k)$, terminating when (see Eq. 26):

  $\| \nabla L_A(x, \lambda_k; \mu_k) \| \le \tau_k$   (26)

  Call the result $x_k^*$.
  If the final convergence test holds (e.g., $\tau_k \le$ tol), exit.
  Else

  $\lambda_{k+1} = \lambda_k - \mu_k^{-1} c(x_k^*)$   (27)

  Choose $\mu_{k+1} \in (0, \mu_k)$.
  Choose a new starting point $x_{k+1}$, e.g., $x_{k+1} = x_k^*$.
End

This algorithmic framework allows a gentler decrease in μ. The flowchart of the algorithmic framework can be seen in Fig. 8.

Fig. 8 Flowchart for the augmented Lagrangian multiplier algorithm method for optimal control
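The framework of Eqs. 25–27 can be sketched on a toy equality-constrained problem; the inner minimizer here is plain gradient descent, and the floor on μ is an illustrative safeguard against an ill-conditioned inner problem, not part of the stated algorithm:

```python
def aug_lagrangian_min(f_grad, c, c_grad, x0, lam=0.0, mu=1.0,
                       outer=20, inner=500, step=0.05, tol=1e-8):
    """Augmented-Lagrangian loop of Eqs. 25-27 for one equality constraint:
    inner gradient descent on L_A(x) = f(x) - lam*c(x) + c(x)^2/(2*mu),
    then the multiplier update lam <- lam - c(x*)/mu and a decrease of mu."""
    x = list(x0)
    for _ in range(outer):
        for _ in range(inner):                 # minimize L_A(x, lam; mu)
            cv = c(x)
            g = [gf + (-lam + cv / mu) * gc    # gradient of L_A
                 for gf, gc in zip(f_grad(x), c_grad(x))]
            x = [xi - step * gi for xi, gi in zip(x, g)]
            if sum(gi * gi for gi in g) < tol:
                break
        lam = lam - c(x) / mu                  # Eq. 27: multiplier update
        mu = max(0.5 * mu, 0.1)                # gentle decrease, floored
    return x, lam

# toy problem: minimize x^2 + y^2 subject to x + y = 1 (optimum at (0.5, 0.5))
x_opt, lam = aug_lagrangian_min(
    f_grad=lambda x: [2 * x[0], 2 * x[1]],
    c=lambda x: x[0] + x[1] - 1.0,
    c_grad=lambda x: [1.0, 1.0],
    x0=[0.0, 0.0])
```

The multiplier converges to the exact Lagrange multiplier (here 1) while the iterates approach the constrained optimum, which is the behavior the tank formulation of Section 5.4 relies on at much larger scale.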
5.4 Augmented Lagrangian implementation in optimal control The augmented Lagrangian implementation can be im- plemented while considering Lp as the main function and α as Lagrange multipliers and rk as augmented Lagrangian multiplier. In order to cater for the con- straints in this minimization, we need to allocate them the Lagrange multipliers α and augmented Lagrangian multiplier rk, where αi ≥ 0 ∀i and rk ≥ 0, respectively (see Eq. 28): Lp = −12 H1(t) 2 − 1 2 H2(t)2 + 12u(t) 2 − a1(t) × [ H1(t + 1) − H1(t) − [ 1 A1 ( Qi − C12 arctan (H1 − H2) √ 2g(H1 − H2) − Cl arctan(H1) √ 2gH1 )] × Ts ] − rk × [ H1(t + 1) − [ H1(t) − ( 1 A1 ( Qi − C12 arctan(H1 − H2) √ 2g(H1 − H2) ) − Cl arctan(H1) √ 2gH1 )] × Ts ]2 − a2(k) × [ H2(t + 1) − H2(t) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2 )] × Ts ] − rk × [ H2(t + 1) − H2(t) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2 )] × Ts ]2 − a3(k) × [ Qi(t + 1) − Qi(t) − [ b m arctan(u) √ 2gu(t) − am Qi(t) ] × Ts ] − rk × [ Qi(t + 1) − Qi(t) − [ b m arctan(u) √ 2gu(t) − am Qi(t) ] × Ts ]2 − a4(k) × [ QL − Cdl √ 2gH1 ] − rk [ QL − Cdl √ 2gH1 ]2 − a5(k) × [e − r + H2] − rk × [e − r + H2]2 − a6(k) × [ u − kpe − kIx3 ] − rk × [u − kpe − kIx3]2 − hn1 × [Qi(t)] − rk × [Qi(t)]2 − hn2 × [200 − Qi(t)] − rk × [200 − Qi(t)]2 − hn3 × [Q0(t)] − rk × [Q0(t)]2 − hn4 × [200 − Q0(t)] − rk × [200 − Q0(t)]2 − hn5 × [H1(t)] − rk × [H1(t)]2 − hn6 × [200 − H1(t)] − rk × [200 − H1(t)]2 − hn7 × [H2(t)] − rk × [H2(t)]2 − hn8 × [200 − H2(t)] − rk × [200 − H2(t)]2 − hn9 × [u(t)] − rk × [u(t)]2 − hn10 × [18 − u(t)] − rk × [18 − u(t)]2 − hn11 × [am] − rk × [am]2 − hn12 × [18 − am] − rk × [18 − am]2 − hn13 × [b m] − rk × [b m]2 − hn14 × [18 − b m] − rk × [18 − b m]2 (28) 192 Int J Adv Manuf Technol (2012) 62:183–203 where α1 → α6 are the Lagrange multipliers and hn1 → hn14 are the penalty coefficients. 
Now minimi- zing Lp with respect to H1, H2, Qi, Q0, am, b m, u(t), H1(t + 1), H2(t + 1), Qi(t + 1) and setting the deriva- tives to zero. Minimizing Lp with respect to H1 gives (see Eq. 29): ∂Lp ∂ H1 = −H1(t) − α1 − 1 − [ 1 A1 ( −C12 1 + ( (H1(t) − H2(t)) √ 2g(H1(t) − H2(t)2). ( √ 2g(H1(t) − H2(t)) + (H1(t) − H2(t)).g√ 2g(H1(t) − H2(t)) ) − ( −CL 1 + (H1(t) √ 2g(H1(t))2) . ( √ 2g(H1(t)) + (H1(t).g)√ 2g(H1(t)) ) − 2rk [ (H1(t + 1))− H1(t)− [ 1 A1 ( Qi −C12 arctan(H1 − H2) √ 2g(H1 − H2)−Cl arctan(H1) √ 2gH1 )] ×Ts ] + α2 A2 ( C12 1 + ((H1(t)) − (H2(t)) √ 2g(H1(t) − H2(t)))2 ) . ( √ 2g(H1(t) − H2(t)) − ((H1(t) − H2(t)).g)√ 2g(H1(t) − H2(t)) ) − 2rk(H2(t + 1) − H2(t)) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2 )] × Ts )] +α4 ( Cdl√ 2gH1 ) − 2rk [ (QL − Cdl √ 2gH1) ] − hn5(1) − 2rk[(H1)] − hn6(1) − 2rk[(200 − H1)] = 0 (29) Minimizing Lp with respect to H2 gives (see Eq. 30): ∂Lp ∂ H2 = −H2(t) − α1A1 ( −C12 1 + ((H1(t) − H2(t)) √ 2g(H1(t) − H2(t)2 . − ( √ 2g(H1(t) − H2(t)) − ((H1(t) − H2(t)).g√ 2g(H1(t) − H2(t)) ) × 2rk [ (H1(t + 1) − H1(t) − [ 1 A1 (Qi − C12 arctan(H1 − H2) √ 2g(H1 − H2) − Cl arctan(H1) √ 2gH1) ] × Ts ] + α2 A2 ( − 1+ C12 1 + ((H1(t) − (H2(t)) √ 2g(H1(t) − H2(t))2 ) . ( − √2g(H1(t) − H2(t)) − ((H1(t) − H2(t)).g)√ 2g(H1(t) − H2(t)) ) − C0 1 + [ (H2(t) √ 2gH2(t) ]2 . ( √ 2gH2(t) + (H2(t)).g√ 2g(H2(t)) ) − 2rk(H2(t + 1) − H2(t) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2 )] × Ts )] −α6(1) − 2rk[(e − r + H2)] − hn7(1) − 2rk[(H2)] + hn8(1) − 2rk[(200 − H2)] = 0 (30) Int J Adv Manuf Technol (2012) 62:183–203 193 Minimizing Lp with respect to Qi gives (see Eq. 31): ∂Lp ∂ Qi(t) = −Qi(t) + Q0(t)) + Qi(t) − α3(1 + am) − 2rk [( Qi(t + 1) − Qi(t) −[b m arctan(u) √ 2gu(t) − am Qi(t)] × Ts )] − hn1(1) − 2rk[(Qi)] + hn2(1) − 2rk[(200 − Qi)] = 0 (31) Minimizing Lp with respect to Q0 gives (see Eq. 
32): ∂Lp ∂ Q0 = (Qi(t) − Q0(t)) − hn3 − 2rk[(Q0)] + hn4 − 2rk[(200 − Q0)] (32) Minimizing Lp with respect to am gives (see Eq. 33): ∂Lp ∂am = am − α3(Qi(t)) − 2rk [( Qi(t + 1) − Qi(t) − [b m arctan(u) √ 2gu(t) − am Qi(t) ] × Ts )] − hn11 − 2rk[(am)] + hn12 − 2rk[(18 − am)] = 0 (33) Minimizing Lp with respect to b m gives (see Eq. 34): ∂Lp ∂b m = b m − α3 ( − arctan u(t)√2gu(t) ) − 2rk ( H2(t + 1) − H2(t) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2 ) ] × Ts )] − hn13 − 2rk[(b m)] + hn14 − 2rk[(18 − b m)] = 0 (34) Minimizing Lp with respect to u(t) gives (see Eq. 35): ∂Lp ∂u(t) = u(t) − α3 − b m 1 + (u(t)√2gu(t))2 ·√2gu(t) + u(t) . g√ 2gu(t) − 2rk[(Qi(t + 1) − Qi(t) − [b m arctan(u) √ 2gu(t) − [am Qi(t)] × Ts ] − hn9 − 2rk[(u(t))] + hn10 − 2rk[(18 − u(t))] = 0 (35) 5.4.1 Solving the derivatives Considering all the Lagrange multipliers α, minimizing Lp with respect to the states gives (see Eqs. 36–41): Q0(t) + α3 − α3(am) − hn1 + hn2 = 0 (36) am − α3 Qi(t) − hn11 + hn12 = 0 (37) b m + α3(arctan u(t) √ 2gu(t) − hn13 + hn14 = 0 (38) u + α3 ( b m 1 + (u(t)(√2gu(t))2 )( √ 2gu(t) + u(t).g√ 2gu(t) ) − hn9 + hn10 = 0 (39) H2(t) − α6 − hn7 + hn8 = 0 (40) H1(t) − hn5 − hn6 = 0 (41) ∂Lp ∂ H1(t + 1) ⇒ −α1 = 0 (42) ∂Lp ∂ H2(t + 1) ⇒ −α2 = 0 (43) ∂Lp ∂ Qi(t + 1) ⇒ −α3 = 0 (44) Substituting Eqs. 29–44 gives a new formulation (Eq. 41), which being dependent on α, we need to maximize (see Eqs. 
45–55): Q0(t) − hn1 + hn2 = 0 (45) am − hn11 + hn12 = 0 (46) b m − hn13 + hn14 = 0 (47) Qi(t) − Q0(t) − hn3 + hn4 = 0 (48) u + α5 − hn9 + hn10 = 0 (49) H1(t) − hn5 − hn6 + α4 ( gCdl√ 2gH1 ) = 0 (50) H2(t) − α6 − hn7 + hn10 = 0 (51) α1 = H1(t + 1) − H1(t) − [ 1 A1 ( Qi − C12 arctan(H1 − H2) √ 2g(H1 − H2) − Cl arctan(H1) √ 2gH1 )] (52) α2 = H2(t + 1) − H2(t) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2 ) (53) 194 Int J Adv Manuf Technol (2012) 62:183–203 α3 = Qi(t + 1) − Qi(t) − [ b m arctan(u) √ 2gu(t) − am Qi(t) ] (54) Cdl + α4( √ 2gH1) = 0 (55) Thus, the implication of Lagrangian without implica- tion of augmentation after the insertion of all the con- straint derivatives is as follows (see Eqs. 56–76): Lp = −12 ( C0 √ 2gH2(t) )2 − H1(t) [ 1 2 H1(t) + 200 ] − H2(t) [ 7 2 H2(t) + e − r − 200 ] −1 2 [ Qi(t) − Q20(t) + 1 2 Q2i (t) − 4Qi(t)Q0(t) + 200Qi(t) + 2Q20(t) + 18am + 18b m − u(t) [ 1 2 u(t) + kpe + kIx3 − 18 ] + Cdl ( QL√ 2gH1 − 1 2 Cdl ) − α4 ( 200gCdl√ 2gH1(t) ) +α5(18 − u(t)) + α6(2H2(t) − 200) +α7(Qi(t) − 200) + α8(−Qi(t)) +α9(−200 + Q0(t)) + α10(−Q0(t)) +α11(−H1(t) + 200) + α12(H1(t)) +α13(2H2(t) + e − r − 200) +α14(−2H2(t) − e + r) +α15(−2u(t) + kpe + kIx3 − 18 + u(t)) +α16(u(t) − kpe − kIx3 − u(t)) +α17(am − 18) + α18(−am) +α19(b m − 18) + α20(−b m) (56) where, α1 = H1(t + 1) − H1(t) − 1 A1 (Qi − C12 arctan(H1 − H2 √ 2g(H1 − H2) − Cl arctan(H1) √ 2gH1) (57) α2 = H2(t + 1) − H2(t) − [ 1 A2 (C12 arctan(H1 − H2) √ 2g(H1 − H2) − C0 arctan(H2) √ 2gH2) (58) α3 = Qi(t + 1) − Qi(t) − [b m arctan(u) √ 2gu(t) − am Qi(t) ] (59) α4 = −Cdl√ 2gH1 (60) α5 = α15 − α16 − u(t) (61) α6 = H2(t) − α13 + α14 (62) hn1 = Q0(t) + hn2 (63) hn2 = hn1 − Q0(t) (64) hn3 = Qi(t) − Q0(t) + hn4 (65) hn4 = Q0(t) − Qi(t) + hn3 (66) hn5 = H1(t) − hn6 + α4 ( gCdl√ 2gH1 ) (67) hn6 = H1(t) − hn5 + α4 ( gCdl√ 2gH1 ) (68) hn7 = H2(t) − α6 + hn8 (69) hn8 = −H2(t) + α6 + hn7 (70) hn9 = u(t) + α5 + hn10 (71) hn10 = hn9 − α5 − u(t) (72) hn11 = am + hn12 (73) Int J Adv Manuf 
Technol (2012) 62:183–203 195 hn12 = hn11 − am (74) hn13 = b m + hn14 (75) hn14 = hn13 − b m (76) Now, the final formulation, dependent on α, penalty function, and augmented Lagrange multiplier, rk yields (see Eq. 77): Lp = −12 ( C0 √ 2gH2(t) )2 − H1(t) [ 1 2 H1(t) + 200 ] − H2(t) [ 7 2 H2(t) + e − r − 200 ] − 1 2 [ Qi(t) − Q20(t) + 1 2 Q2i (t) − 4Qi(t)Q0(t) + 200Qi(t) + 2Q20(t) + 18am + 18b m − u(t) [ 1 2 u(t) + kpe + kIx3 − 18 ] + Cdl ( QL√ 2gH1 − 1 2 Cdl ) − αHT − hn IT − 2rk AT where α = [α α2 α3 α4 −α5 −α6 ] where I = [−hn1 −hn2 −hn3 −hn4 −hn5 −hn6 −hn7 −hn8 −hn9 −hn10 −hn11 −hn12 −hn13 −hn14 ] and HT = ⎡ ⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢ ⎣ H1(t + 1) − H1(t) − [ 1 A1 ( Qi − C12 arctan(H1 − H2) √ 2g(H1 − H2) − Cl arctan(H1) √ 2gH1 )] H2(t + 1) − H2(t) − [ 1 A2 ( C12 arctan(H1 − H2) √ 2g(H1 − H2) −C0 arctan(H2) √ 2gH2 ) Qi(t + 1) − Qi(t) −[bm arctan(u) √ 2gu(t) − am Qi(t) ] − 200gCdl√ 2gH1(t) 18 − u(t) 2H2(t) − 200 ⎤ ⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥ ⎦ T and IT = ⎡ ⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢ ⎣ Qi(t) − 200 −Qi(t) −200 + Q0(t) −Q0(t) −H1(t) + 200 H1(t) 2H2(t) + e − r − 200 −2H2(t) − e + r −2u(t) + kpe + kIx3 − 18 + u(t) u(t) − kpe − kIx3 − u(t) am − 18 −am b m − 18 −b m ⎤ ⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥ ⎦ and AT = ⎡ ⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢ ⎣ QL −Cdl √ 2gH1 u(t) −kpe −kIx3 e −r H2 854 ⎤ ⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥ ⎦ (77) 5.5 Augmented Lagrangian unconstrained presentation co-states calculation for optimal control The augmented Lagrangian presentation is given as (see Eq. 
78):

L_\mu(k,1) = -\frac{1}{2} H_1(k)^2 - \frac{1}{2} H_2(k)^2 + \frac{1}{2} u(k)^2
    - a_1(k)\,[\Phi_1 T_s] - r_k\,[\Phi_1 T_s]^2
    - a_2(k)\,[\Phi_2 T_s] - r_k\,[\Phi_2 T_s]^2
    - a_3(k)\,[\Phi_3 T_s] - r_k\,[\Phi_3 T_s]^2
    - a_4(k)\,[Q_L - C_{dl}\sqrt{2gH_1}] - r_k\,[Q_L - C_{dl}\sqrt{2gH_1}]^2
    - a_5(k)\,[e - r + H_2] - r_k\,[e - r + H_2]^2
    - a_6(k)\,[u - k_p e - k_I x_3] - r_k\,[u - k_p e - k_I x_3]^2
    - h_{n1}\,[Q_i(k)] - r_k\,[Q_i(k)]^2 - h_{n2}\,[200 - Q_i(k)] - r_k\,[200 - Q_i(k)]^2
    - h_{n3}\,[Q_0(k)] - r_k\,[Q_0(k)]^2 - h_{n4}\,[200 - Q_0(k)] - r_k\,[200 - Q_0(k)]^2
    - h_{n5}\,[H_1(k)] - r_k\,[H_1(k)]^2 - h_{n6}\,[200 - H_1(k)] - r_k\,[200 - H_1(k)]^2
    - h_{n7}\,[H_2(k)] - r_k\,[H_2(k)]^2 - h_{n8}\,[200 - H_2(k)] - r_k\,[200 - H_2(k)]^2
    - h_{n9}\,[u(k)] - r_k\,[u(k)]^2 - h_{n10}\,[18 - u(k)] - r_k\,[18 - u(k)]^2
    - h_{n11}\,[a_m] - r_k\,[a_m]^2 - h_{n12}\,[18 - a_m] - r_k\,[18 - a_m]^2
    - h_{n13}\,[b_m] - r_k\,[b_m]^2 - h_{n14}\,[18 - b_m] - r_k\,[18 - b_m]^2 \quad (78)

where, for brevity, \Phi_1, \Phi_2, and \Phi_3 denote the discretized state defects of Eqs. 52–54:

\Phi_1 = H_1(t+1) - H_1(t) - \frac{1}{A_1}\left( Q_i - C_{12}\arctan(H_1 - H_2)\sqrt{2g(H_1 - H_2)} - C_l \arctan(H_1)\sqrt{2gH_1} \right)

\Phi_2 = H_2(t+1) - H_2(t) - \frac{1}{A_2}\left( C_{12}\arctan(H_1 - H_2)\sqrt{2g(H_1 - H_2)} - C_0 \arctan(H_2)\sqrt{2gH_2} \right)

\Phi_3 = Q_i(t+1) - Q_i(t) - \left[ b_m \arctan(u)\sqrt{2gu(t)} - a_m Q_i(t) \right]

The co-states \alpha_k are determined by backward integration of the adjoint state equation, yielding (see Eq. 79):

\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix}_{k-1} = -2 h_k \frac{\partial E_d}{\partial x_k} - F_d^T \lambda_k - h_k \left[ \sum_{i=1}^{N} \nabla_{x_k}\, \mu_s\!\left(\sigma_{jk}, s_{dl_i}(x_k)\right) \right] - h_k \left[ \sum_{j=1}^{N} \nabla_{x_k}\, \phi_{\mu g}\!\left(\rho_{jk}, g_{D_j}(x_k, \upsilon_k, h_k)\right) \right] \quad (79)

where

x_{k+1} = f_{D_{dz}}(x_k, \tau_k, h_k), \quad k = 0, \ldots, N-1

g_{D_j}(x_k, \upsilon_k, h_k) \le 0, \quad j \in \{1, 2, \ldots, J\}

F_d^T = I_d, \quad E_d = (H_1, H_2, u) \quad (80)

5.6 Implementation of augmented Lagrangian iterative algorithm for control of two-tank system

The two-tank system simulation has been carried out in Matlab using the differential state equations representing the model of the two-tank system (see Figs.
9, 10, 11, and 12). Running the augmented Lagrangian iterative algorithm yields the states of the system, the Lagrange multipliers, and the augmented Lagrangians. The profiles of u can be seen in Figs. 13, 14, and 15. It can be seen that the iteration process keeps updating the control u profile, finally giving a steady profile of u in Fig. 15.

5.7 Genetic algorithm-based implementation of augmented Lagrangian

Because the system is highly diversified, the standard functional technique of implementing the genetic algorithm was not successful; therefore, a generalized genetic algorithm has been designed to capture all the critical aspects of implementing the algorithm. The steps of the genetic algorithm are as follows.

Fig. 9 Two-tank implementation for the height of H1 tank

Fig. 10 Two-tank implementation for the height of H2 tank

Fig. 11 Two-tank implementation for the flow Qi

Fig. 12 Two-tank implementation for the flow Ql

5.7.1 Step 1: genetic algorithm function (section written again in algorithmic format)

The following is the function of the genetic algorithm developed; the details of step 1 are given in Program 5.7.1.1.
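Before detailing the GA steps, the iterative scheme of Sections 5.5–5.6 is worth summarizing in code: each outer pass minimizes the augmented Lagrangian for fixed multipliers, then updates the multipliers from the constraint residual. The Python sketch below is a minimal method of multipliers on a toy one-dimensional problem; the cost, constraint, fixed penalty weight, and step sizes are illustrative assumptions, not the paper's Matlab implementation or the exact sign conventions of Eq. 78.

```python
# Method-of-multipliers (augmented Lagrangian) sketch:
# minimize f(x) subject to h(x) = 0 by alternating
#   (i)  unconstrained minimization of L(x) = f(x) - a*h(x) + r*h(x)^2
#   (ii) first-order multiplier update a <- a - 2*r*h(x)
def augmented_lagrangian(f, h, x0, a0=0.0, r=5.0, outer=30, inner=200, lr=0.05):
    x, a = x0, a0
    for _ in range(outer):
        L = lambda z, a=a: f(z) - a * h(z) + r * h(z) ** 2
        for _ in range(inner):                      # inner loop: gradient descent
            eps = 1e-6                              # central finite-difference gradient
            grad = (L(x + eps) - L(x - eps)) / (2 * eps)
            x -= lr * grad
        a -= 2 * r * h(x)                           # multiplier update from the residual
    return x

# toy problem: minimize x^2 subject to x - 1 = 0, whose solution is x* = 1
x_star = augmented_lagrangian(lambda x: x * x, lambda x: x - 1.0, x0=0.0)
```

With the penalty weight held fixed, the multiplier estimate converges geometrically on this toy problem; production implementations usually also grow the penalty across outer iterations and replace the inner loop with a proper solver.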
Program 5.7.1.1 Step 1: genetic algorithm function

function [global_best, cost_convergence] = genetic(num_genes, min_gene, max_gene, pop_size, minimize, elite_count, tour_size, prob_mut, max_gen, alpha)
% called as genetic(a, b, c, d, e, f, g, h, i, j), where:
%   a = number of genes
%   b = row vector containing minimum values of genes
%   c = row vector containing maximum values of genes
%   d = population size
%   e = '1' for minimize and '0' for maximize
%   f = number of elite chromosomes (must be an even number)
%   g = tournament size for parent chromosome selection
%   h = probability of mutation
%   i = maximum number of generations
%   j = alpha for BLX-alpha crossover; typical values are 0.4 to 0.5

Fig. 13 Step 1: iterative method implementation for the profile of control u

Initializing the best-solution matrix with zeros:

best = zeros(max_gen, num_genes + 1);

5.7.2 Step 2: initial population generation (section written again in algorithmic format)

Step 2 performs the initial population generation; the details are given in Program 5.7.2.1.

Program 5.7.2.1 Step 2: initial population generation

pop = repmat(min_gene, pop_size, 1) + rand(pop_size, 1)*(max_gene - min_gene)

Fig. 14 Step 2: iterative method implementation for the profile of control u

Fig. 15 Step 3: iterative method implementation for the profile of control u

5.7.3 Step 3: fitness function evaluation (section written again in algorithmic format)

Step 3 performs the fitness function evaluation; the details are given in Program 5.7.3.1.

Program 5.7.3.1 Step 3: fitness function evaluation

for gen = 1:1:max_gen
  cost = costfunc(pop)
  % calculating fitness
  if minimize == 1
    fitness = 1./cost
  else
    fitness = cost
  end

5.7.4 Step 4: selection of elite chromosomes and best solutions (section written again in algorithmic format)

Step 4 performs the selection of elite chromosomes and best solutions.
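The initialization and fitness mapping of Programs 5.7.2.1 and 5.7.3.1 amount to sampling each gene uniformly within its bounds and, when minimizing, taking fitness as the reciprocal of cost. A Python sketch under those assumptions follows; the gene bounds and the quadratic placeholder cost stand in for the paper's costfunc and are purely illustrative.

```python
import random

def init_population(pop_size, min_gene, max_gene):
    # pop = min_gene + rand * (max_gene - min_gene), one uniform draw per gene
    return [[lo + random.random() * (hi - lo) for lo, hi in zip(min_gene, max_gene)]
            for _ in range(pop_size)]

def fitness_of(costs, minimize=True):
    # when minimizing, a lower cost must yield a higher fitness: fitness = 1/cost
    return [1.0 / c if minimize else c for c in costs]

pop = init_population(75, [0.0, 0.0], [200.0, 18.0])        # illustrative bounds
costs = [1.0 + sum(g * g for g in chrom) for chrom in pop]  # placeholder cost function
fit = fitness_of(costs)
```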
The details are given in Program 5.7.4.1.

5.7.5 Step 5: applying crossover (section written again in algorithmic format)

Step 5 applies the crossover; the details are given in Program 5.7.5.1. The crossover point is found by generating a random integer between "1" and "number-of-genes-minus-one."

Program 5.7.4.1 Step 4: selection of elite chromosomes and best solutions

pop_sort = [pop fitness];
pop_sort = sortrows(pop_sort, (num_genes+1));
elite = pop_sort(end-elite_count+1:end, 1:end-1);
best(gen,:) = pop_sort(end,:);
% generation formation
new_gen = zeros(pop_size, num_genes);
new_gen(end-elite_count+1:end,:) = elite;
for g_update = 1:1:(pop_size-elite_count)/2
  % tournament selection
  tour_index = 1 + (pop_size-1)*rand(2, tour_size);
  tour_index = round(tour_index);
  parent_ind = max(tour_index');
  % preventing asexual reproduction
  while parent_ind(1) == parent_ind(2)
    [a, b] = max(tour_index(2,:));
    tour_index(2, b) = 0;
    parent_ind(2) = max(tour_index(2,:));
  end
  % the selected parents are given by
  parent(1,:) = pop_sort(parent_ind(1), 1:end-1);
  parent(2,:) = pop_sort(parent_ind(2), 1:end-1);

5.7.6 Step 6: comparison (section written again in algorithmic format)

Step 6 compares the children with the parents and selects the fitter ones to pass on to the

Program 5.7.5.1 Step 5: applying crossover

c = 1 + (num_genes-1-1)*rand;
c = round(c);
child(1,:) = [parent(1,1:c) parent(2,c+1:end)];
child(2,:) = [parent(2,1:c) parent(1,c+1:end)];
% applying BLX-alpha crossover
for blx = 1:num_genes
  pblx = [parent(1,blx) parent(2,blx)]';
  pmax = max(pblx);
  pmin = min(pblx);
  I = pmax - pmin;
  child(1,blx) = (pmin - I*alpha) + ((pmax + I*alpha) - (pmin - I*alpha))*rand;
  child(2,blx) = (pmin - I*alpha) + ((pmax + I*alpha) - (pmin - I*alpha))*rand;
  % saturating the value of the child gene if it exceeds the constraint on the gene
  for child_check = 1:1:2
    if
child(child_check, blx) > max_gene(blx)
      child(child_check, blx) = max_gene(blx);
    end
  end
end

next generation. The details are given in Program 5.7.6.1.

Program 5.7.6.1 Step 6: comparison

comp = [parent; child];
[cost_comp] = costfunc(comp);
% calculating fitness
if minimize == 1
  fit_comp = 1./cost_comp;
else
  fit_comp = cost_comp;
end
comp = [comp fit_comp];
comp = sortrows(comp, (num_genes+1));
% selecting the two most fit out of the four parents and children
new_gen((2*g_update)-1:(2*g_update),:) = comp(end-1:end, 1:end-1);

5.7.7 Step 7: mutation (section written again in algorithmic format)

The mutation point is found by generating a random integer between "1" and "number-of-genes" for each child if d

Program 5.7.8.1 Step 8: formulating global best and cost convergence

if minimize == 1
  cost_convergence = 1./(best(:, num_genes+1));
else
  cost_convergence = best(:, num_genes+1);
end
global_best = best(:, 1:num_genes);

The following section explains the results obtained after the implementation of the genetic algorithm.

5.7.9 GA-based optimization of parameters of the tank system

The genetic algorithm is used here to optimize the height and control parameters of the coupled-tank system. As a PI controller is used to control the height of the tanks, the main aim is to obtain the optimal values of the gains multiplying the proportional and integral parts of the controller, so as to get the maximum output with minimum energy. Figure 16 shows the result of the genetically optimized augmented Lagrangian-based scheme for the height of the H1 tank. As the water flows first through the tank of height H1 and then moves to the tank of height H2, the optimized height of the water flowing in tank 1 is greater. Figure 17 shows the result of the genetically optimized augmented Lagrangian-based scheme for the height of the H2 tank.
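The BLX-α crossover of Program 5.7.5.1 draws each child gene uniformly from [pmin − Iα, pmax + Iα], where I = pmax − pmin, and then saturates the gene against its bounds. A Python sketch of that operator follows; the parent values, bounds, and α are illustrative.

```python
import random

def blx_alpha(parent1, parent2, min_gene, max_gene, alpha=0.5):
    child = []
    for p1, p2, lo, hi in zip(parent1, parent2, min_gene, max_gene):
        pmin, pmax = min(p1, p2), max(p1, p2)
        spread = pmax - pmin                                   # I = pmax - pmin
        low, high = pmin - alpha * spread, pmax + alpha * spread
        gene = low + (high - low) * random.random()            # uniform in [low, high)
        child.append(min(max(gene, lo), hi))                   # saturate against gene bounds
    return child

child = blx_alpha([1.0, 10.0], [3.0, 14.0], [0.0, 0.0], [200.0, 18.0])
```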
Figure 18 shows the result of the genetically optimized augmented Lagrangian-based controller voltage u for minimum energy.

Fig. 16 Genetic algorithm-based optimization for the height of H1 tank

Fig. 17 Genetic algorithm-based optimization for the height of H2 tank

5.8 Particle swarm optimization-based implementation of augmented Lagrangian

A particle swarm-based augmented Lagrangian is developed and applied to the problem defined above to search for the optimal values of the control input data clusters. The number of iterations is set to 15, the population size to 75, and the cognitive and social parameters c1 and c2 are both set to 2; the constraints on the radii, as defined above, are observed strictly. The convergence of the objective function is shown in Fig. 19. The convergence of the cost function to an optimal or near-optimal solution, regardless of the initial solution, demonstrates the robustness of the algorithm.

Fig. 18 Genetic algorithm-based optimization for the controller voltage u

Fig. 19 PSO-based cost convergence for controller voltage u

6 Comparison of results between iterative method and genetic algorithm

For optimal control, the results of the iteration-based algorithm and the PSO–GA-based approach are compared in this section (see Figs. 20 and 21). It can be seen that both the iterative algorithm and the PSO–GA-based algorithm were able to provide a "controlled" u profile in the given number of iterations, but the genetic algorithm has an edge over the iterative algorithm because it provides a more economical value of u.

Fig. 20 Iterative algorithm-based optimal control

Fig. 21 PSO–GA-based optimal control

Remark 2 As both PSO and GA are computation-intensive algorithms, they require a lot of time to optimize.
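The PSO run of Section 5.8 (15 iterations, swarm size 75, c1 = c2 = 2) follows the standard global-best particle swarm update. The sketch below uses a sphere cost in place of the augmented-Lagrangian objective and an inertia weight w = 0.7 that the paper does not state; both are assumptions for illustration.

```python
import random

def pso(cost, dim, lo, hi, swarm=75, iters=15, c1=2.0, c2=2.0, w=0.7):
    # initialize positions uniformly in the box, velocities at zero
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # keep inside the box
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

best, best_cost = pso(lambda x: sum(v * v for v in x), dim=2, lo=-10.0, hi=10.0)
```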
To minimize the computational time and obtain quick optimal values in critical situations, it is proposed to run both algorithms in parallel. Moreover, a quick optimal value can be captured by computing the algorithms in parallel with fewer iterations, and then using the offline data for the full computation and the best optimal results.

7 Surface plots for analysis

To analyze the effect of the control/energy u on the height of the first tank, the height of the second tank, and the initial flow, the results are shown in Figs. 22 and 23. It can be seen that in Fig. 22, where H1 is involved, a larger hydraulic height is captured than in Fig. 23, where the hydraulic height of H2 is smaller. This is only because the water flows from tank 1 and then reaches tank 2.

Fig. 22 Surface view for height of tank 1, initial flow, and control/energy

Fig. 23 Surface view for height of tank 2, initial flow, and control/energy

8 Conclusion

In this paper, we presented an optimal control approach to a constrained nonlinear optimization fault diagnosis problem, based on a combination of strategies, namely the augmented Lagrangian and a PSO–GA-based approach. A detailed optimization derivation of the dynamic augmented Lagrangian based on optimal control theory has been given. As such, this augmented Lagrangian-based approach can be made an effective part of an overall approach that tackles both optimal control of the system and optimization of the nonlinear constraints. The optimal control approach ensures the optimal height of water at the minimum energy level. The effectiveness of the scheme has been evaluated on a benchmarked laboratory-scale two-tank system. In a future study, comparing the performance of the PSO–GA-based algorithm with that of the individual evolutionary methods might be worth investigating.
Acknowledgements The authors wish to acknowledge the valuable suggestions and useful comments of two anonymous reviewers; their suggestions and comments have improved the quality of the paper immensely. The authors also acknowledge the financial support of the Natural Sciences and Engineering Research Council (NSERC) of Canada and the King Fahd University of Petroleum and Minerals in carrying out this work.

References

1. Bemporad A, Morari M (1999) Control of systems integrating logic, dynamics and constraints. Automatica 35(3):407–427
2. Avraam MP, Shah N, Pantelides CC (1998) Modelling and optimization of general hybrid systems in the continuous time domain. Comput Chem Eng 22(1):221–228
3. Vassiliadis VS, Sargent RWH, Pantelides CC (1994) Solution of a class of multistage dynamic optimization problems. 1. Problems without path constraints. Ind Eng Chem Res 33:2111–2122
4. Vassiliadis VS, Sargent RWH, Pantelides CC (1994) Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Ind Eng Chem Res 33:2123–2133
5. Ruban A (1997) Sensitivity coefficients for discontinuous dynamic systems. Comput Syst Sci Int 36(4):536–542
6. Stoer J, Bulirsch R (1980) Introduction to numerical analysis. Springer, New York
7. Robinson S (1980) Strongly regular generalized equations. Math Oper Res 5(1):43–62
8. Alt W (1990) Lagrange–Newton method for infinite dimensional optimization problems. Numer Funct Anal Optim 11(3–4):201–224
9. Alt W (1991) Parametric programming with applications to optimal control and sequential quadratic programming. Mathematische Schriften 34(1):1–37
10. Alt W (1990) Stability of solutions and the Lagrange–Newton method for nonlinear optimization and optimal control problems. Habilitationsschrift, Universität Bayreuth, Bayreuth
11. Alt W, Malanowski K (1993) The Lagrange–Newton method for nonlinear optimal control problems. Comput Optim Appl 2(1):77–100
12.
Khoukhi A, Baron L, Balazinski M (2008) Constrained multi-objective trajectory planning of parallel kinematic machine. Robot Comput-Integr Manuf 25:756–769
13. Khoukhi A (2002) Dynamic modeling and optimal time-energy off-line programming for mobile robots, a cybernetic problem. Kybernetes 31(5–6):731–735
14. Khoukhi A, Baron L, Balazinski M (2007) Constrained multi-objective off-line programming of parallel kinematic machines. Technical report, École Polytechnique de Montréal, CDT-P2877-05, 70 pp
15. Jong K, Spears W (1989) Using genetic algorithms to solve NP-complete problems. In: Proceedings of the third international conference on genetic algorithms. Kaufmann, Waltham
16. Goldberg D, Zakrzewski K, Sutton B, Gadient R, Chang C, Gallego P, Miller B, Cantu-Paz E (1997) Genetic algorithms: a bibliography. IlliGAL report no. 97011. Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign, Champaign
17. Miller BL, Goldberg DE (1996) Optimal sampling for genetic algorithms. In: Proceedings of the artificial neural networks in engineering (ANNIE '96) conference 6. ASME, Fairfield, pp 291–297
18. Eberhart R, Shi Y (1998) Parameter selection in particle swarm optimisation. In: Evolutionary programming VII, pp 591–600
19. Shi Y, Eberhart R (1999) Empirical study of particle swarm optimization. In: Proceedings of the 1999 congress on evolutionary computation, pp 1945–1950
20. Eberhart R, Shi Y (1998) Comparison between genetic algorithms and particle swarm optimization. Lect Notes Comput Sci 1447:611–616
21. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Perth, Australia, pp 1942–1945
22. Kennedy J, Eberhart R (2001) Swarm intelligence.
Academic, San Diego