Two differential equation systems for inequality constrained optimization

Li Jin a,b,*, Li-Wei Zhang a, Xian-Tao Xiao a

a Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, Liaoning Province, China
b Department of Mathematics, Anshan Normal University, Anshan 114005, Liaoning Province, China

Applied Mathematics and Computation 188 (2007) 1334–1343, doi:10.1016/j.amc.2006.10.085

* Corresponding author. Address: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, Liaoning Province, China. E-mail addresses: [email protected] (L. Jin), [email protected] (L.-W. Zhang). The research is supported by the National Natural Science Foundation of China under project Grant No. 10471015 and by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

Abstract

This paper presents two differential equation systems, involving first and second order derivatives of the problem functions, respectively, for solving inequality constrained optimization problems. Local minimizers of the optimization problems are proved to be asymptotically stable equilibrium points of the two differential systems. First, Euler discrete schemes with constant stepsizes for the two differential systems are presented and their convergence theorems are demonstrated. Second, an Euler discrete scheme for the second differential system with an Armijo line search rule is proposed and proved to have a locally quadratic convergence rate. The numerical results based on the second system of differential equations show that the Runge–Kutta method for the second system has good stability and that the Euler discrete scheme with the Armijo line search rule converges fast.

Keywords: Inequality-constrained optimization; Constraint qualification; Differential equation; Asymptotical stability; Equilibrium point

1. Introduction

Consider the inequality-constrained optimization problem

    min f(x)  s.t.  g_i(x) ≤ 0,  i = 1, ..., m,    (1.1)

where f : R^n → R and g_i : R^n → R, i = 1, ..., m, are twice continuously differentiable.

Differential equation methods for solving equality constrained optimization problems were proposed by Arrow and Hurwicz [1], Fiacco and McCormick [2] and Evtushenko [3], and developed by Yamashita [4], Pan [5] and Evtushenko and Zhadan [6–10]. The main idea of this type of method is to construct a system of differential equations whose equilibrium points coincide with the solutions of the equality constrained problem. Evtushenko and Zhadan have studied, by using the stability theory of differential equations, differential equation approaches for optimization problems over closed sets with equality constraints. This class of problems includes inequality constrained problems as special cases. They convert the problem of interest to an equality constrained one by using so-called space transformations, and then construct a system of differential equations to solve the resulting equality constrained problem. For equality constrained problems, the systems defined by [4] and by Evtushenko and Zhadan [9] are quite similar, and both are of the form

    B(x) dx/dt = −(∇f(x) + ∇h(x)u(x)),  dh(x)/dt = −s h(x),    (1.2)

where B(x) ∈ R^{n×n} is symmetric positive definite, h : R^n → R^l, ∇h(x) is the transpose of the Jacobian of h at x, and s > 0 is a constant. Actually, if we choose s ≡ 1, then the system in [4] is obtained, and if B(x) ≡ I_{n×n}, then the above system becomes the one of Evtushenko and Zhadan [9]. Along this line, Zhang [11] studied a modified version of (1.2) for solving equality constrained optimization problems.

In this paper, we consider a direct way of constructing a system of differential equations to solve inequality constrained problems without using the space transformations of Evtushenko and Zhadan [9]. In Section 2, we propose a nonlinear Lagrangian, F_r(x, y), and construct a system of differential equations from this function, which does not need a space transformation. We prove that the system is asymptotically stable at local minimum points of the problem. In Section 3, a differential system which involves the second order derivatives of the problem functions is constructed, and the Euler discrete scheme for this system with constant stepsizes is proved to be convergent. Furthermore, an Euler discrete scheme with an Armijo line search for the second system is proposed, and its global convergence and locally quadratic convergence rate are demonstrated. In Section 4, the Runge–Kutta method and the Euler discrete scheme with the Armijo line search are implemented to solve the second differential equation system. The numerical results show that the Runge–Kutta method has good stability and high precision and that the Euler discrete scheme with the Armijo line search converges fast.

2. The first differential system

Let L(x, μ) = f(x) + Σ_{i=1}^m μ_i g_i(x) be the (classical) Lagrangian for problem (1.1) and let I(x) = {i | g_i(x) = 0, i = 1, ..., m} denote the active index set at x. Let x* be a local minimizer of problem (1.1) and (x*, μ*) the corresponding Kuhn–Tucker point, which satisfies

    ∇_x L(x*, μ*) = ∇f(x*) + Σ_{i=1}^m μ_i* ∇g_i(x*) = 0,
    μ* ≥ 0,  μ_i* g_i(x*) = 0,  g_i(x*) ≤ 0,  i = 1, ..., m.    (2.1)

Furthermore, let the Jacobian uniqueness conditions, proposed in [2], hold at (x*, μ*):

(1) The multipliers satisfy μ_i* > 0, i ∈ I(x*).
(2) The gradients ∇g_i(x*), i ∈ I(x*), are linearly independent.
(3) y^T ∇²_{xx} L(x*, μ*) y > 0 for all 0 ≠ y ∈ {y | ∇g_i(x*)^T y = 0, i ∈ I(x*)}.

To obtain the numerical solution, we propose a nonlinear Lagrangian for problem (1.1) of the following form:

    F_r(x, y) = f(x) + r Σ_{i=1}^m y_i² (e^{r^{-1} g_i(x)} − 1),  r > 0.

Let z^T = (x^T, y^T). By differentiating F_r(x, y) with respect to x and y, respectively, we construct the following system of differential equations:

    dx/dt   = −∇f(x) − Σ_{i=1}^m y_i² e^{r^{-1} g_i(x)} ∇g_i(x),
    dy_i/dt = 2 r y_i (e^{r^{-1} g_i(x)} − 1),  i = 1, ..., m.    (2.2)

The following lemma will be used in the proof of the forthcoming theorem.

Lemma 2.1 [12]. Let A be an n × n symmetric matrix, B an r × n matrix and U = diag(μ_1, ..., μ_r) with μ_i > 0, i = 1, ..., r. If κ > 0 is a scalar and By = 0 implies ⟨Ay, y⟩ ≥ κ⟨y, y⟩, then there are scalars λ_0 > 0 and c ∈ (0, κ) such that, for any λ ≥ λ_0,

    ⟨(A + λ B^T U B) x, x⟩ ≥ c ⟨x, x⟩  for all x ∈ R^n.

Lemma 2.2. Let (x*, μ*) be a Kuhn–Tucker point of (1.1) at which the Jacobian uniqueness conditions hold, and let y* ∈ R^m be such that (y_i*)² = μ_i*, i = 1, ..., m. Then:

(i) F_r(x*, y*) = L(x*, μ*) = f(x*);
(ii) ∇_x F_r(x*, y*) = ∇_x L(x*, μ*) = 0;
(iii) ∇²_{xx} F_r(x*, y*) = ∇²_{xx} L(x*, μ*) + r^{-1} ∇g(x*)^T D ∇g(x*), where D = diag(μ_1*, ..., μ_m*);
(iv) there are r̄ > 0 and c > 0 such that, for any r ∈ (0, r̄],

    ⟨∇²_{xx} F_r(x*, y*) w, w⟩ ≥ c ⟨w, w⟩  for all w ∈ R^n.

Proof. Without loss of generality, assume that I(x*) = {1, ..., p} with p ≤ m. By complementarity, g_i(x*) = 0 for i ≤ p and y_i* = 0 for i > p, so each term y_i²(e^{r^{-1}g_i(x*)} − 1) vanishes and

    F_r(x*, y*) = f(x*) + r Σ_{i=1}^m (y_i*)² (e^{r^{-1} g_i(x*)} − 1) = f(x*),

which gives (i). By direct calculation,

    ∇_x F_r(x*, y*) = ∇f(x*) + Σ_{i=1}^m (y_i*)² e^{r^{-1} g_i(x*)} ∇g_i(x*) = ∇f(x*) + Σ_{i=1}^m μ_i* ∇g_i(x*) = ∇_x L(x*, μ*) = 0,

which gives (ii). Differentiating once more,

    ∇²_{xx} F_r(x*, y*) = ∇²_{xx} f(x*) + Σ_{i=1}^m (y_i*)² e^{r^{-1} g_i(x*)} ∇²_{xx} g_i(x*) + r^{-1} Σ_{i=1}^m (y_i*)² e^{r^{-1} g_i(x*)} ∇g_i(x*) ∇g_i(x*)^T
                        = ∇²_{xx} L(x*, μ*) + r^{-1} ∇g(x*)^T D ∇g(x*),

which gives (iii). Taking A = ∇²_{xx} L(x*, μ*), B = [∇g_1(x*), ..., ∇g_p(x*)]^T and U the restriction of D to the active indices in Lemma 2.1, and using the Jacobian uniqueness conditions, we obtain (iv). The proof is completed. □
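As a concrete illustration of Lemma 2.2 (i) and (ii), the nonlinear Lagrangian F_r and its x-gradient can be evaluated numerically. The sketch below uses a hypothetical toy instance of (1.1) that is not from the paper — min x₁² + x₂² subject to 1 − x₁ − x₂ ≤ 0, whose Kuhn–Tucker pair is x* = (0.5, 0.5), μ* = 1 — together with the assumed value r = 1:

```python
import numpy as np

r = 1.0                                   # penalty parameter (assumed value)

# Hypothetical toy instance of (1.1): min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0.
def f(x):      return x[0]**2 + x[1]**2
def grad_f(x): return 2.0 * x
def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])    # row i is grad g_i(x)^T

def F_r(x, y):
    """Nonlinear Lagrangian F_r(x, y) = f(x) + r * sum_i y_i^2 (e^{g_i(x)/r} - 1)."""
    return f(x) + r * np.sum(y**2 * (np.exp(g(x) / r) - 1.0))

def grad_x_F_r(x, y):
    """x-gradient: grad f(x) + sum_i y_i^2 e^{g_i(x)/r} grad g_i(x)."""
    return grad_f(x) + grad_g(x).T @ (y**2 * np.exp(g(x) / r))

x_star = np.array([0.5, 0.5])             # minimizer; the constraint is active, mu* = 1
y_star = np.array([1.0])                  # y_i^* = sqrt(mu_i^*)

val  = F_r(x_star, y_star)                # equals f(x*) by Lemma 2.2(i)
grad = grad_x_F_r(x_star, y_star)         # vanishes by Lemma 2.2(ii)
```

At the Kuhn–Tucker pair the exponential term of the active constraint vanishes, so F_r collapses to f(x*) and the x-gradient reduces to ∇_x L(x*, μ*) = 0.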
Lemma 2.3. Let the Jacobian uniqueness conditions hold at (x*, μ*). Then (x*, y*) is an equilibrium point of system (2.2) if and only if (x*, μ*) is a Kuhn–Tucker pair of (1.1) with (y_i*)² = μ_i*, i = 1, ..., m.

Proof. Let (x*, μ*) be a Kuhn–Tucker pair of (1.1). Introducing y_i* = sqrt(μ_i*), i = 1, ..., m, we have from Lemma 2.2 that ∇_x F_r(x*, y*) = ∇_x L(x*, μ*) = 0, and complementarity yields y_i*(e^{r^{-1} g_i(x*)} − 1) = 0 for every i, which implies that (x*, y*) is an equilibrium point of (2.2). Conversely, let (x*, y*) be an equilibrium point of (2.2). Defining μ_i* = (y_i*)², i = 1, ..., m, we obtain from Lemma 2.2 and the Jacobian uniqueness conditions that

    ∇_x L(x*, μ*) = 0,  μ* ≥ 0,  μ_i* g_i(x*) = 0,  g_i(x*) ≤ 0,  i = 1, ..., m,

showing that (x*, μ*) is a Kuhn–Tucker pair of (1.1). □

Now we prove that system (2.2) is asymptotically stable and that the Euler iterative scheme with constant stepsizes is locally convergent.

Theorem 2.1. Let (x*, μ*) be a Kuhn–Tucker point of (1.1) at which the Jacobian uniqueness conditions hold, and let r̄ > 0 be as in Lemma 2.2(iv). Then, for r ∈ (0, r̄], system (2.2) is asymptotically stable at (x*, y*), where μ_i* = (y_i*)², i = 1, ..., m.

Proof. Without loss of generality, assume that I(x*) = {1, ..., p} with p ≤ m. Linearizing system (2.2) in a neighborhood of z*^T = (x*^T, y*^T), we obtain the equation of the first approximation of (2.2) about the equilibrium point z*:

    dz/dt = −Q(z − z*),    (2.3)

where

    Q = [ Q1  0 ;  0  Q2 ],    (2.4)

and Q1 ∈ R^{(n+p)×(n+p)}, Q2 ∈ R^{(m−p)×(m−p)} are given by

    Q1 = [ ∇²_{xx} F_r(x*, y*)    2y_1*∇g_1(x*) ⋯ 2y_p*∇g_p(x*) ;
           −2y_1*∇g_1(x*)^T       0 ;
                ⋮ ;
           −2y_p*∇g_p(x*)^T       0 ],    (2.5)

    Q2 = −diag( 2r(e^{r^{-1} g_i(x*)} − 1) )_{i=p+1}^m.

The stability of system (2.2) is determined by the roots of the characteristic equation

    det(Q − λ I_{n+m}) = 0,    (2.6)

which is equivalent to the two equations |Q1 − λ I_{n+p}| = 0 and |Q2 − λ I_{m−p}| = 0. The solutions of the second equation are explicitly

    λ_i = −2r(e^{r^{-1} g_i(x*)} − 1),  n + p + 1 ≤ i ≤ n + m,

and, since g_i(x*) < 0 for the inactive indices,

    λ̄ = min_{n+p+1 ≤ i ≤ n+m} λ_i > 0.    (2.7)

Let ŷ denote the conjugate of a complex vector y and Re(b) the real part of a complex number b. Let α be an eigenvalue of Q1 and (z_1, z_2) ∈ C^n × C^p a corresponding nonzero eigenvector. Then

    Re( [ẑ_1, ẑ_2] Q1 (z_1; z_2) ) = Re( α [ẑ_1, ẑ_2] (z_1; z_2) ) = Re(α)(|z_1|² + |z_2|²).    (2.8)

From the definition of Q1,

    Re( [ẑ_1, ẑ_2] Q1 (z_1; z_2) ) = Re( ẑ_1^T ∇²_{xx} F_r(x*, y*) z_1 + ẑ_2^T Q_{21} z_1 + ẑ_1^T Q_{12} z_2 ),    (2.9)

where Q_{12} = (2y_1*∇g_1(x*), ..., 2y_p*∇g_p(x*)) and Q_{21} = −Q_{12}^T. Since for any real matrix M

    Re( ẑ^T M^T w ) = Re( ŵ^T M z ),    (2.10)

the two cross terms in (2.9) cancel, and

    Re( [ẑ_1, ẑ_2] Q1 (z_1; z_2) ) = Re( ẑ_1^T ∇²_{xx} F_r(x*, y*) z_1 ).    (2.11)

It follows from (2.8) and (2.11) that

    Re(α)(|z_1|² + |z_2|²) = Re( ẑ_1^T ∇²_{xx} F_r(x*, y*) z_1 ).    (2.12)

In view of Lemma 2.2(iv), the matrix ∇²_{xx} F_r(x*, y*) is positive definite, so for z_1 ≠ 0

    Re( ẑ_1^T ∇²_{xx} F_r(x*, y*) z_1 ) > 0,    (2.13)

and hence Re(α) > 0. If z_1 = 0, then Q1(z_1; z_2) = α(z_1; z_2) gives

    (2y_1*∇g_1(x*), ..., 2y_p*∇g_p(x*)) z_2 = 0,

which, by the Jacobian uniqueness conditions, yields z_2 = 0; this contradicts (z_1, z_2) ≠ 0. Therefore, by (2.7) and (2.13), all eigenvalues of Q have strictly positive real parts, so all eigenvalues of the coefficient matrix −Q in (2.3) have negative real parts. It follows from Lyapunov's stability theorem for the first-order approximation that (x*, y*) is a locally asymptotically stable equilibrium point of (2.2). □
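The linearization argument above can be checked numerically: the matrix Q in (2.3) is the Jacobian at z* of the negative right-hand side of (2.2), and all of its eigenvalues should have positive real parts. A sketch, again on a hypothetical toy instance not taken from the paper (min x₁² + x₂² s.t. 1 − x₁ − x₂ ≤ 0, with r = 1 assumed), with Q approximated by central differences:

```python
import numpy as np

r = 1.0                                        # assumed penalty parameter

def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])

def phi(z):
    """Negative right-hand side of (2.2); its zeros are the equilibrium points."""
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    return np.concatenate([2.0 * x + grad_g(x).T @ (y**2 * e),   # grad f(x) = 2x here
                           -2.0 * r * y * (e - 1.0)])

z_star = np.array([0.5, 0.5, 1.0])             # equilibrium: x* = (0.5, 0.5), y* = 1

# Central-difference Jacobian of phi at z*, i.e. the matrix Q of (2.3)-(2.5).
eps = 1e-6
dim = z_star.size
Q = np.zeros((dim, dim))
for j in range(dim):
    e_j = np.zeros(dim); e_j[j] = eps
    Q[:, j] = (phi(z_star + e_j) - phi(z_star - e_j)) / (2.0 * eps)

re_parts = np.linalg.eigvals(Q).real           # all positive, as Theorem 2.1 predicts
```

For this instance the eigenvalues of Q work out to 2 and 2 ± 2i, so the linearized flow dz/dt = −Q(z − z*) contracts toward z*.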
Discretizing system (2.2) by the Euler method, one obtains the iterative process

    x_{k+1}   = x_k − t_k ( ∇f(x_k) + Σ_{i=1}^m y_{ki}² e^{r^{-1} g_i(x_k)} ∇g_i(x_k) ),
    y_{k+1,i} = y_{ki} + 2 r t_k y_{ki} ( e^{r^{-1} g_i(x_k)} − 1 ),  i = 1, ..., m,    (2.14)

where t_k is a stepsize. The following theorem tells us that the Euler scheme with constant stepsizes is locally convergent.

Theorem 2.2. Let (x*, μ*) be a Kuhn–Tucker pair of (1.1) at which the Jacobian uniqueness conditions hold. Then there exists t̄ > 0 such that, for any 0 < t_k < t̄, the iterations defined by (2.14) with r ∈ (0, r̄] converge locally to (x*, y*), where μ_i* = (y_i*)², i = 1, ..., m.

Proof. Let z^T = (x^T, y^T). With constant stepsize t, the iteration (2.14) can be written as z_{k+1} = F(z_k), where

    F(z) = z − t ( ∇f(x) + Σ_{i=1}^m y_i² e^{r^{-1} g_i(x)} ∇g_i(x),
                   −2ry_1(e^{r^{-1} g_1(x)} − 1), ..., −2ry_m(e^{r^{-1} g_m(x)} − 1) ),    (2.15)

and z* is a fixed point of the mapping F. The convergence of the iterations (2.14) will be proved if we demonstrate that a t̄ > 0 can be chosen such that the iterations z_{k+1} = F(z_k) converge to z* whenever z_0 is in a neighborhood of z* and 0 < t_k < t̄. Let ∇F(z) be the Jacobian transpose of F(z) and let ν_1, ..., ν_{n+m} be the eigenvalues of ∇F(z*)^T, which have the expression

    ν_j = (1 − t a_j) − i (t b_j),

where a_j + i b_j, j = 1, ..., n + m, are the eigenvalues of Q. From the proof of Theorem 2.1 one has a_j > 0. The condition |ν_j| < 1 can be written as

    t < 2 a_j / (a_j² + b_j²),  j = 1, ..., n + m.

Let

    t̄ = min{ 2 a_j / (a_j² + b_j²) | 1 ≤ j ≤ n + m }.

Then the spectral radius of ∇F(z*)^T is strictly less than 1 for t < t̄, and the iterations generated by the scheme are locally convergent to (x*, y*) (see [6]). The proof is completed. □
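The constant-stepsize Euler scheme (2.14) is straightforward to run. The sketch below applies it to the same hypothetical toy instance used earlier (min x₁² + x₂² s.t. 1 − x₁ − x₂ ≤ 0, with r = 1 assumed); for that instance the eigenvalues of Q give t̄ = 0.5 in Theorem 2.2, so a stepsize of 0.02 is comfortably inside the convergent range:

```python
import numpy as np

r = 1.0                                        # assumed penalty parameter

def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])

# One step of (2.14) for the toy problem min x1^2 + x2^2, 1 - x1 - x2 <= 0.
def euler_step(x, y, t):
    e = np.exp(g(x) / r)
    x_new = x - t * (2.0 * x + grad_g(x).T @ (y**2 * e))   # grad f(x) = 2x here
    y_new = y + 2.0 * r * t * y * (e - 1.0)
    return x_new, y_new

x, y = np.array([0.0, 0.0]), np.array([1.0])   # arbitrary starting point
t = 0.02                                       # constant stepsize, below t_bar = 0.5
for _ in range(5000):
    x, y = euler_step(x, y, t)
```

The iterates approach x* = (0.5, 0.5), with y² recovering the multiplier μ* = 1.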
3. The second differential system

The method in Section 2 can be viewed as a gradient approach, through system (2.2), to solving problem (1.1). Now we discuss the Newton version. The continuous version of the Newton method leads to the initial value problem for the following system of ordinary differential equations:

    Λ(z) dz/dt = −diag_{1≤i≤n+m}(c_i) φ(z),    (3.1)

where (c_1, ..., c_{n+m}) is a scaling vector with c_i > 0 and Λ(z) is the Jacobian matrix of the mapping

    φ(z) = ( ∇f(x) + Σ_{i=1}^m y_i² e^{r^{-1} g_i(x)} ∇g_i(x),
             −2ry_1(e^{r^{-1} g_1(x)} − 1), ..., −2ry_m(e^{r^{-1} g_m(x)} − 1) ),  z^T = (x^T, y^T).    (3.2)

Note that φ(z) is precisely the negative of the right-hand side of (2.2), so the zeros of φ are the equilibrium points of the first system and (3.1) is the associated Newton flow. One easily obtains the formula

    Λ(z) = [ ∇²_{xx} F_r(x, y)                  2y_1 e^{r^{-1}g_1(x)}∇g_1(x)  ⋯  2y_m e^{r^{-1}g_m(x)}∇g_m(x) ;
             −2y_1 e^{r^{-1}g_1(x)}∇g_1(x)^T    −2r(e^{r^{-1}g_1(x)} − 1) ;
                  ⋮                                      ⋱ ;
             −2y_m e^{r^{-1}g_m(x)}∇g_m(x)^T                  −2r(e^{r^{-1}g_m(x)} − 1) ],

where the lower-right block is diagonal.

Lemma 3.1. Let the conditions of Theorem 2.1 be satisfied at the point (x*, μ*). Then, for r ∈ (0, r̄], Λ(z*) is a nonsingular matrix, where μ_i* = (y_i*)², i = 1, ..., m, and z*^T = (x*^T, y*^T).

Proof. Let (x*, μ*) be a Kuhn–Tucker point of (1.1) at which the Jacobian uniqueness conditions hold, and assume I(x*) = {1, ..., p}, p ≤ m. Then e^{r^{-1} g_i(x*)} = 1 for i ≤ p and y_i* = 0 for i > p, so

    Λ(z*) = [ ∇²_{xx} F_r(x*, y*)   2y_1*∇g_1(x*) ⋯ 2y_p*∇g_p(x*)   0 ;
              −2y_1*∇g_1(x*)^T      0 ;
                   ⋮ ;
              −2y_p*∇g_p(x*)^T      0 ;
              0                     0    −diag( 2r(e^{r^{-1} g_i(x*)} − 1) )_{i=p+1}^m ].

Hence the matrix Λ(z*) coincides with the matrix Q introduced in (2.4). It was shown in the proof of Theorem 2.1 that all eigenvalues of Q have strictly positive real parts. Therefore, Λ(z*) is nonsingular. □
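The Jacobian Λ(z) in (3.1) can be assembled directly from the block formula above. A sketch for the hypothetical toy instance used earlier (r = 1 assumed; the constraints there have ∇²g_i = 0, so ∇²_{xx}F_r contains only the ∇²f term and the rank-one exponential terms), cross-checked against a finite-difference Jacobian of φ:

```python
import numpy as np

r = 1.0                                        # assumed penalty parameter

def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])

def phi(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    return np.concatenate([2.0 * x + grad_g(x).T @ (y**2 * e),
                           -2.0 * r * y * (e - 1.0)])

def Lam(z):
    """Block Jacobian of phi: [[Hxx, B], [-B^T, D]] as in the formula above."""
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    G = grad_g(x)                              # m x n, rows grad g_i^T
    Hxx = 2.0 * np.eye(2) + (G.T * (y**2 * e / r)) @ G   # second g-derivatives vanish here
    B = G.T * (2.0 * y * e)                    # columns 2 y_i e^{g_i/r} grad g_i
    D = np.diag(-2.0 * r * (e - 1.0))
    return np.block([[Hxx, B], [-B.T, D]])

z = np.array([0.3, 0.1, 0.8])                  # arbitrary test point
eps = 1e-6
J = np.zeros((3, 3))                           # finite-difference Jacobian of phi
for j in range(3):
    e_j = np.zeros(3); e_j[j] = eps
    J[:, j] = (phi(z + e_j) - phi(z - e_j)) / (2.0 * eps)
```

At z* = (0.5, 0.5, 1) the assembled Λ(z*) is nonsingular, as Lemma 3.1 asserts.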
Theorem 3.1. Let the conditions of Theorem 2.1 be satisfied at the point (x*, μ*). Then system (3.1) with r ∈ (0, r̄] is asymptotically stable at z*, where μ_i* = (y_i*)², i = 1, ..., m, and z*^T = (x*^T, y*^T).

Proof. From the second order smoothness of φ around z*, we have

    φ(z) = φ(z*) + Λ(z*) δ(z) + H(δ(z)),

where δ(z) = z − z* and H(δ(z)) = O(‖δ(z)‖²). Linearizing system (3.1) at the point z*, we obtain

    dδ(z)/dt = −Q̂(z*) δ(z),  Q̂(z*) = Λ(z*)^{-1} diag_{1≤i≤n+m}(c_i) Λ(z*).

The stability of this system is determined by the roots of the characteristic equation

    det( Q̂(z*) − λ I_{n+m} ) = 0.

The matrix Q̂(z*) is similar to the matrix diag_{1≤i≤n+m}(c_i); therefore they have the same eigenvalues λ_i = c_i > 0, 1 ≤ i ≤ n + m. According to the Lyapunov linearization principle, the equilibrium point z* is asymptotically stable. □

Integrating system (3.1) by the Euler method, one obtains the iterative process

    z_{k+1} = z_k − h_k diag_{1≤i≤n+m}(c_i) Λ(z_k)^{-1} φ(z_k).    (3.3)

Theorem 3.2. Let the conditions of Theorem 2.1 be satisfied at the point (x*, μ*). Then there exists t̄ > 0 such that, for any 0 < h_k < t̄, the iterations defined by (3.3) with r ∈ (0, r̄] converge locally to (x*, y*), where μ_i* = (y_i*)², i = 1, ..., m.

We adopt the Armijo line search rule to determine the step lengths when the Euler method is used. Let E(z) = ‖φ(z)‖² be the merit function.

Algorithm 3.1.

Step 1: Given a ∈ (0,1), c_i = 1 (1 ≤ i ≤ n + m), ρ ∈ (0, 1/2), ε ≥ 0 and an initial point z_0^T = (x_0^T, y_0^T); set k := 0.
Step 2: If E(z_k) ≤ ε, then stop; otherwise, compute the search direction

    d_k = −Λ(z_k)^{-1} φ(z_k).

Step 3: Compute the next iterate

    z_{k+1} = z_k + h_k d_k,

where h_k = a^{i_k} and i_k is the smallest nonnegative integer i satisfying

    E(z_k + a^i d_k) ≤ (1 − 2ρ a^i) E(z_k).

Step 4: Set k := k + 1 and go to Step 2.
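Algorithm 3.1 is short to implement once φ and Λ are available. The sketch below instantiates it for the hypothetical toy instance used earlier (r = 1, a = 0.5 and ρ = 0.1 are assumed parameter choices, and a cap on the backtracking loop is added as a practical safeguard not present in the paper):

```python
import numpy as np

r = 1.0                                        # assumed penalty parameter

def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])

def phi(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    return np.concatenate([2.0 * x + grad_g(x).T @ (y**2 * e),
                           -2.0 * r * y * (e - 1.0)])

def Lam(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    G = grad_g(x)
    Hxx = 2.0 * np.eye(2) + (G.T * (y**2 * e / r)) @ G
    B = G.T * (2.0 * y * e)
    return np.block([[Hxx, B], [-B.T, np.diag(-2.0 * r * (e - 1.0))]])

def E(z): return float(phi(z) @ phi(z))        # merit function E(z) = ||phi(z)||^2

def algorithm_3_1(z, a=0.5, rho=0.1, eps=1e-14, max_iter=100):
    for k in range(max_iter):
        if E(z) <= eps:                        # Step 2: stopping test
            break
        d = -np.linalg.solve(Lam(z), phi(z))   # Step 2: Newton direction, c_i = 1
        h = 1.0                                # Step 3: Armijo backtracking
        for _ in range(50):                    # safeguard cap (not in the paper)
            if E(z + h * d) <= (1.0 - 2.0 * rho * h) * E(z):
                break
            h *= a
        z = z + h * d                          # Step 3 update; Step 4: next k
    return z, k

z_sol, iters = algorithm_3_1(np.array([0.0, 0.0, 1.0]))
```

Since ⟨∇E(z), d⟩ = −2E(z), the Newton direction is always a descent direction for the merit function, so the backtracking loop terminates for small enough step lengths.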
Theorem 3.3. Let the problem functions of (1.1) be twice continuously differentiable, let {z_k} be generated by Algorithm 3.1, and suppose that, for any cluster point z̄ of this sequence, Λ(z̄) is nonsingular. Then the iterations {z_k} converge to a solution of E(z) = 0.

Proof. Obviously, the sequence {E(z_k)} is monotone decreasing and convergent. Let z̄ be a cluster point of {z_k}; then there is a subsequence {z_{k_j}}, {k_j} ⊂ {1, 2, 3, ...}, with z_{k_j} → z̄. From the Armijo line search rule we have

    E(z_{k_j}) − E(z_{k_j} + h_{k_j} d_{k_j}) ≥ 2ρ h_{k_j} E(z_{k_j}),
    E(z_{k_j}) − E(z_{k_j} + a^{-1} h_{k_j} d_{k_j}) < 2ρ a^{-1} h_{k_j} E(z_{k_j})  (whenever i_{k_j} ≥ 1).

Assume that E(z̄) ≠ 0. Since {E(z_k)} converges, the first inequality forces h_{k_j} E(z_{k_j}) → 0, and since E(z_{k_j}) → E(z̄) > 0 we obtain h_{k_j} → 0 as j → ∞. Hence, for large j, i_{k_j} ≥ 1 and the second inequality gives

    ( E(z_{k_j} + a^{-1} h_{k_j} d_{k_j}) − E(z_{k_j}) ) / (a^{-1} h_{k_j}) > −2ρ E(z_{k_j}).

Passing to the limit as j → ∞, we obtain

    ⟨∇E(z̄), d̄⟩ ≥ −2ρ E(z̄),    (3.4)

where d̄ = −Λ(z̄)^{-1} φ(z̄). Since ∇E(z) = 2Λ(z)^T φ(z),

    ⟨∇E(z̄), d̄⟩ = −⟨2Λ(z̄)^T φ(z̄), Λ(z̄)^{-1} φ(z̄)⟩ = −2‖φ(z̄)‖² = −2E(z̄).    (3.5)

Combining (3.4) and (3.5), we obtain

    0 ≤ (2 − 2ρ) E(z̄) ≤ 0,

and therefore E(z̄) = 0, which contradicts the assumption E(z̄) ≠ 0. We conclude that any cluster point of {z_k} is a solution of E(z) = 0. □
Theorem 3.4. Let the conditions of Theorem 2.1 be satisfied at the point (x*, μ*), and let Λ(z) satisfy a Lipschitz condition in a neighborhood of z*. Assume that the sequence {z_k} generated by Algorithm 3.1 converges to z*. Then there is an integer K large enough such that h_k = 1 for k > K, and the sequence {z_k} generated by Algorithm 3.1 with r ∈ (0, r̄] converges quadratically to z*, where μ_i* = (y_i*)², i = 1, ..., m, and z*^T = (x*^T, y*^T).

Proof. Let H(z) = φ(z); then H(z*) = 0_{n+m}. From Lemma 3.1 we have that Λ(z*) is a nonsingular matrix, so there are constants c_0 > 0, c_1 > 0, c_2 > 0, c_3 > 0 and a neighborhood N(z*) such that, for z ∈ N(z*),

    ‖H(z)‖ = ‖H(z) − H(z*)‖ ≥ c_0 ‖z − z*‖,
    ‖H(z)‖ = ‖H(z) − H(z*)‖ ≤ c_1 ‖z − z*‖,
    ‖Λ(z) − Λ(z*)‖ ≤ c_2 ‖z − z*‖,
    ‖Λ(z)^{-1}‖ ≤ c_3.    (3.6)

Since z_k → z*, for sufficiently large K we have, when k ≥ K,

    ‖z_k − z*‖ ≤ r_0,  z_k ∈ N(z*),    (3.7)

where r_0 = sqrt(1 − 2ρ) · c_0 / ( c_1 c_3 c_2 (2 + c_3 c_1) ), and therefore

    ‖d_k‖ = ‖Λ(z_k)^{-1} H(z_k)‖ ≤ c_3 c_1 ‖z_k − z*‖.    (3.8)

We can deduce from (3.6)–(3.8) and Λ(z_k)d_k = −H(z_k) that

    ‖H(z_k + d_k)‖ = ‖H(z_k + d_k) − H(z_k) − Λ(z_k) d_k‖
                   ≤ sup_{0≤t≤1} ‖(Λ(z_k + t d_k) − Λ(z_k)) d_k‖
                   ≤ sup_{0≤t≤1} ‖(Λ(z_k + t d_k) − Λ(z*)) d_k‖ + ‖(Λ(z_k) − Λ(z*)) d_k‖
                   ≤ 2 c_2 ‖z_k − z*‖ ‖d_k‖ + c_2 ‖d_k‖²
                   ≤ c_1 c_3 c_2 (2 + c_3 c_1) ‖z_k − z*‖²;    (3.9)

therefore, when k ≥ K,

    E(z_k) − E(z_k + d_k) − 2ρ E(z_k) = (1 − 2ρ) ‖H(z_k)‖² − ‖H(z_k + d_k)‖²
      ≥ (1 − 2ρ) c_0² ‖z_k − z*‖² − [ c_1 c_3 c_2 (2 + c_3 c_1) ]² ‖z_k − z*‖⁴
      ≥ [ (1 − 2ρ) c_0² − [ c_1 c_3 c_2 (2 + c_3 c_1) ]² r_0² ] ‖z_k − z*‖²
      ≥ 0,    (3.10)

which implies that the unit step length h_k = 1 satisfies the Armijo line search rule for k ≥ K in Algorithm 3.1. Hence, for k ≥ K, Algorithm 3.1 becomes the classical Newton method for φ(z) = 0 and the rate of convergence is quadratic; see for instance [13]. □
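Theorem 3.4 predicts that near z* the unit step h_k = 1 passes the Armijo test and the residual then decays quadratically. Both effects are easy to observe numerically; the sketch below takes pure Newton steps (h = 1) from a point close to z* on the hypothetical toy instance used earlier (r = 1 and ρ = 0.1 assumed) and records the residual norms:

```python
import numpy as np

r = 1.0                                        # assumed penalty parameter
rho = 0.1                                      # assumed Armijo parameter

def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])

def phi(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    return np.concatenate([2.0 * x + grad_g(x).T @ (y**2 * e),
                           -2.0 * r * y * (e - 1.0)])

def Lam(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    G = grad_g(x)
    Hxx = 2.0 * np.eye(2) + (G.T * (y**2 * e / r)) @ G
    B = G.T * (2.0 * y * e)
    return np.block([[Hxx, B], [-B.T, np.diag(-2.0 * r * (e - 1.0))]])

def E(z): return float(phi(z) @ phi(z))

z = np.array([0.5, 0.5, 1.0]) + 0.01           # start near z* = (0.5, 0.5, 1)
residuals, unit_step_ok = [], []
for _ in range(4):
    d = -np.linalg.solve(Lam(z), phi(z))
    unit_step_ok.append(E(z + d) <= (1.0 - 2.0 * rho) * E(z))   # Armijo test at h = 1
    z = z + d                                  # pure Newton step
    residuals.append(np.linalg.norm(phi(z)))
```

Each full step is accepted by the Armijo rule, and the residual roughly squares from one iteration to the next until it hits machine precision.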
4. Computation results

To verify the computational efficiency, we implemented a code based on Algorithm 3.1 and on the Runge–Kutta method for the differential equation system (3.1). Test problems chosen from [14] are used to compare the numerical performance of the two approaches.
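For the Runge–Kutta runs, any classical integrator applied to dz/dt = −Λ(z)^{-1}φ(z) (system (3.1) with c_i = 1) can serve. The sketch below uses a standard fixed-step RK4 loop on the hypothetical toy instance from the earlier sections; r = 1, the initial point and the step size are assumed choices, not values from the paper:

```python
import numpy as np

r = 1.0                                        # assumed penalty parameter

def g(x):      return np.array([1.0 - x[0] - x[1]])
def grad_g(x): return np.array([[-1.0, -1.0]])

def phi(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    return np.concatenate([2.0 * x + grad_g(x).T @ (y**2 * e),
                           -2.0 * r * y * (e - 1.0)])

def Lam(z):
    x, y = z[:2], z[2:]
    e = np.exp(g(x) / r)
    G = grad_g(x)
    Hxx = 2.0 * np.eye(2) + (G.T * (y**2 * e / r)) @ G
    B = G.T * (2.0 * y * e)
    return np.block([[Hxx, B], [-B.T, np.diag(-2.0 * r * (e - 1.0))]])

def rhs(z):
    """Right-hand side of (3.1) with c_i = 1: dz/dt = -Lam(z)^{-1} phi(z)."""
    return -np.linalg.solve(Lam(z), phi(z))

z, h = np.array([0.0, 0.0, 1.0]), 0.1          # assumed initial point and step size
for _ in range(200):                           # integrate up to t = 20
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2)
    k4 = rhs(z + h * k3)
    z = z + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Along the exact Newton flow φ(z(t)) = e^{−t} φ(z_0), so the residual decays geometrically and the trajectory ends near z* = (0.5, 0.5, 1).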
Table 1 gives the numerical results of Algorithm 3.1 and the Runge–Kutta method on four test problems; for the sake of brevity, only six significant digits of the solutions are given.

Table 1
Numerical results

No.  Problem       n    m    Algorithm   IT    ‖φ(z)‖²          Accuracy       Time (s)
1    [14, Tp113]   10   8    R (3.1)     80    1.595012e−8      8.4655e−6      4504.7
                             3.1         21    8.886450e−8      2.6308e−5      2752.3
2    [14, Tp108]    9  14    R (3.1)     83    6.449789e−11     0.0328         6968.8
                             3.1         20    9.917368e−11     0.0328         383.1880
3    [14, Tp100]    7   4    R (3.1)    128    1.156948e−6      3.7042e−5      2882.3
                             3.1         15    3.114798e−6      6.7895e−5      56.3750
4    [14, Tp45]     5  10    R (3.1)     62    4.833404e−9      7.0017e−5      40.4530
                             3.1         17    9.21951e−9       1.0039e−4      10.7180

n stands for the dimension of the variables, m for the number of constraints, IT for the number of iterations and R (3.1) for the Runge–Kutta algorithm applied to system (3.1).

Problem Tp108 is interesting in the sense that we obtained the approximate solution

    x*_com = (0.8764, 0.4817, 0.0210, 0.9998, 0.8764, 0.4817, 0.0210, 0.9998, 0.0000)

and f*_com = −0.8660257, which is better than the one given in [14]. Actually, we find that most of our numerical results are considerably more accurate than those reported in [14]. To compare the numerical results obtained by Algorithm 3.1 and the Runge–Kutta algorithm, we adopted the same initial point and the same r in computing each of the problems. The numerical results show that the Runge–Kutta method has good stability and high precision and that Algorithm 3.1 converges fast.

References

[1] K.J. Arrow, L. Hurwicz, Reduction of constrained maxima to saddle point problems, in: J. Neyman (Ed.), Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, 1956, pp. 1–26.
[2] A.V. Fiacco, G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley, New York, 1968.
[3] Yu.G. Evtushenko, Two numerical methods of solving nonlinear programming problems, Sov. Math. Dokl. 15 (2) (1974) 420–423.
[4] H. Yamashita, A differential equation approach to nonlinear programming, Math. Prog. 18 (1980) 155–168.
[5] P.Q. Pan, New ODE methods for equality constrained optimization (1) — equations, J. Comput. Math. 10 (1) (1992) 77–92.
[6] Yu.G. Evtushenko, Numerical Optimization Techniques, Optimization Software, Inc. Publication Division, New York, 1985.
[7] Yu.G. Evtushenko, V.G. Zhadan, Barrier-projective methods for nonlinear programming, Comp. Maths Math. Phys. 34 (5) (1994) 579–590.
[8] Yu.G. Evtushenko, V.G. Zhadan, Stable barrier-projection and barrier-Newton methods in nonlinear programming, Opt. Software 3 (1994) 237–256.
[9] Yu.G. Evtushenko, V.G. Zhadan, Stable barrier-projection and barrier-Newton methods for linear and nonlinear programming, in: E. Spedicato (Ed.), Algorithms for Continuous Optimization, Kluwer Academic Publishers, 1994, pp. 255–285.
[10] Yu.G. Evtushenko, V.G. Zhadan, Stable barrier-projection and barrier-Newton methods in linear programming, Comput. Opt. Appl. 3 (1994) 289–303.
[11] L.W. Zhang, A modified version to the differential system of Evtushenko and Zhadan for solving nonlinear programming, in: Ya-xiang Yuan (Ed.), Numerical Linear Algebra and Optimization, Science Press, 1999, pp. 161–168.
[12] R. Cominetti, J.P. Dussault, Stable exponential-penalty algorithm with superlinear convergence, J. Optim. Theory Appl. 83 (2) (1994) 285–309.
[13] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[14] W. Hock, K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Econom. and Math. Syst. 187, Springer, 1981.

