A Comparative Study of Some Modifications of CG Methods Under Exact Line Search

The Conjugate Gradient (CG) method is a technique for solving nonlinear unconstrained optimization problems. In this paper, we analysed the performance of two modifications of the CG coefficient β_k and compared the results with the classical conjugate gradient methods β_k^{FR} and β_k^{PRP}. The proposed methods possess global convergence properties for general functions under exact line search. Numerical experiments show that the two modifications are more efficient on the test problems than the classical CG coefficients.


Introduction
Consider the following nonlinear unconstrained optimization problem:

$$\min\{f(x) : x \in \mathbb{R}^n\}, \qquad (1)$$

where f : ℝⁿ → ℝ is a continuously differentiable function that is bounded below. To solve (1), starting from an initial point x_0, we obtain the next iterate as

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$

where α_k is the step length and d_k is the search direction defined as

$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1. \end{cases} \qquad (3)$$

The scalar β_k is known as the conjugate gradient parameter and g_k = ∇f(x_k) is the gradient. Some of the classical CG parameters are Fletcher-Reeves (FR) (Fletcher and Reeves, 1964), Polak-Ribiere-Polyak (PRP) (Polak and Ribiere, 1969; Polyak, 1969), Hestenes-Stiefel (HS) (Hestenes and Stiefel, 1952), Conjugate Descent (CD) (Fletcher, 1980) and Liu-Storey (LS) (Liu and Storey, 1991). These parameters are defined as

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \beta_k^{PRP} = \frac{g_k^\top (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad \beta_k^{HS} = \frac{g_k^\top (g_k - g_{k-1})}{d_{k-1}^\top (g_k - g_{k-1})},$$

$$\beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^\top g_{k-1}}, \qquad \beta_k^{LS} = -\frac{g_k^\top (g_k - g_{k-1})}{d_{k-1}^\top g_{k-1}}.$$

Some of these classical methods, such as FR, possess strong convergence properties but may perform poorly in practice, whereas others, such as PRP and HS, often perform well numerically but may fail to converge for general functions. Recently, more research has been published on new modifications of CG methods. For good references to recent CG methods with significant results, see Sulaiman et al. (2018), Sulaiman et al. (2015), Kamilu et al. (2018) and Yasir et al. (2018). Thus, in this paper, we analysed the performance of two modifications of the CG coefficient and compared their performance with that of the classical FR and PRP methods under exact line search. This is done to improve the overall performance of the resulting algorithms.
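For concreteness, a minimal MATLAB sketch of these classical coefficients follows; the function name cg_beta and the variables g, g_prev and d_prev (standing for g_k, g_{k-1} and d_{k-1}) are our own illustrative choices, not taken from the cited papers.

    % Classical CG coefficients in the notation of (2)-(3).
    % g, g_prev : current and previous gradients (column vectors)
    % d_prev    : previous search direction
    function beta = cg_beta(method, g, g_prev, d_prev)
        y = g - g_prev;                              % gradient difference
        switch method
            case 'FR',  beta = (g'*g)/(g_prev'*g_prev);
            case 'PRP', beta = (g'*y)/(g_prev'*g_prev);
            case 'HS',  beta = (g'*y)/(d_prev'*y);
            case 'CD',  beta = -(g'*g)/(d_prev'*g_prev);
            case 'LS',  beta = -(g'*y)/(d_prev'*g_prev);
        end
    end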

Two Modifications and Algorithm
In this section, we present the two modifications of the CG coefficient proposed by Saliha et al. (2018) and Hamoda et al. (2017); their defining formulas are given in the respective references. The following algorithm is a general algorithm for solving unconstrained optimization problems.

Algorithm 1 (CG method with exact line search).
Step 1: Given an initial point x_0, set k = 0 and compute d_0 = -g_0.
Step 2: Compute the step length α_k by the exact line search, α_k = arg min_{α ≥ 0} f(x_k + α d_k).
Step 3: Update the iterate by (2).
Step 4: Compute g_{k+1}. If ‖g_{k+1}‖ ≤ ε, stop.
Step 5: Compute β_{k+1} and the search direction d_{k+1} by (3). Set k = k + 1 and go to Step 2.
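Under the assumption that the exact line search can be approximated by a one-dimensional minimiser, Algorithm 1 admits the following minimal MATLAB sketch; the function name cg_solve, the step-length bracket [0, 10] and the use of fminbnd as a stand-in for the exact search are our own illustrative choices.

    % Minimal CG driver following Steps 1-5 of Algorithm 1 (sketch only).
    % f : objective handle, gradf : gradient handle, x0 : initial point,
    % method : 'FR', 'PRP', 'HS', 'CD' or 'LS' (see cg_beta above).
    function [x, k] = cg_solve(f, gradf, x0, method, tol, kmax)
        x = x0;  g = gradf(x);  d = -g;  k = 0;
        while norm(g) > tol && k < kmax
            phi   = @(a) f(x + a*d);          % line-search function
            alpha = fminbnd(phi, 0, 10);      % stand-in for exact search
            x     = x + alpha*d;              % update (2)
            g_new = gradf(x);
            d     = -g_new + cg_beta(method, g_new, g, d)*d;  % update (3)
            g     = g_new;  k = k + 1;
        end
    end

For example, cg_solve(@(x) sum(x.^2), @(x) 2*x, [10; -5], 'PRP', 1e-6, 1e4) minimises a simple quadratic with the same stopping tolerance used in the experiments below.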

Global Convergence of the New Modifications
In this section, we prove the global convergence of the two modified methods under exact line search. We begin with the sufficient descent condition.

Sufficient Descent Condition
For the sufficient descent condition to hold, we require

$$g_k^\top d_k \le -c\|g_k\|^2 \quad \text{for all } k \ge 0 \text{ and some } c > 0. \qquad (6)$$

The following theorem is used to show that the new modifications possess the sufficient descent condition under exact line search.
Theorem 1. Let {x_k} and {d_k} be sequences generated by (2), (3) and Algorithm 1, where the step size α_k is determined by the exact line search. Then (6) holds for all k ≥ 0.
Proof. The proof of this theorem can be found in (Saliha et al., 2018; Hamoda et al., 2017).
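Although the full proofs are given in the cited papers, the central step is short enough to sketch here. Since the exact line search minimises f(x_{k-1} + α d_{k-1}) exactly, it yields g_k^⊤ d_{k-1} = 0, so for any choice of β_k,

$$g_k^\top d_k = g_k^\top(-g_k + \beta_k d_{k-1}) = -\|g_k\|^2 + \beta_k\, g_k^\top d_{k-1} = -\|g_k\|^2,$$

and (6) holds with c = 1.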

Global Convergence Properties
In this section, we prove the global convergence properties of the new modifications under some assumptions.
Assumption 1.
(I) f(x) is bounded from below on the level set N = {x ∈ ℝⁿ : f(x) ≤ f(x_0)}, where x_0 is the initial point, and f is continuous and differentiable in a neighbourhood of N.
(II) The gradient g(x) is Lipschitz continuous in N, so there exists a constant L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖, ∀x, y ∈ N.
The following lemma by Zoutendijk (Zoutendijk, 1970) is used to prove the global convergence.

Lemma 1. Suppose that Assumption 1 holds. Consider any CG method of the form (2) and (3), where d_k is a descent direction and α_k is obtained by the exact line search. Then the Zoutendijk condition holds, that is,

$$\sum_{k=0}^{\infty} \frac{(g_k^\top d_k)^2}{\|d_k\|^2} < \infty. \qquad (7)$$

The following theorem is based on Lemma 1.

Theorem 2
Suppose that Assumption 1 holds. Consider the two modified methods of the form (2) and (3), where α_k is obtained using the exact line search. Then

$$\lim_{k \to \infty} \|g_k\| = 0. \qquad (8)$$

Proof. The proof of this theorem can be found in (Saliha et al., 2018; Hamoda et al., 2017).
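A sketch of the standard argument behind Theorem 2 (the details are method-specific and appear in the cited papers): by Theorem 1, g_k^⊤ d_k = −‖g_k‖², so the Zoutendijk condition (7) becomes

$$\sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty.$$

Arguing by contradiction, suppose ‖g_k‖ ≥ ε > 0 for all k; then Σ 1/‖d_k‖² < ∞. The specific forms of the two modified coefficients are used to bound the growth of ‖d_k‖² so that this sum diverges, a contradiction, which establishes (8).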

Numerical Results
In this section, we report the detailed numerical results based on comparisons of our new modifications with the classical CG algorithms of FR and PRP. All algorithms are implemented under exact line search. We selected 31 test functions from Andrei (Andrei, 2008) with different dimensions. We considered ε = 10^{-6} and ‖g_k‖ ≤ ε to be the stopping criterion, as suggested by Hillstrom (Hillstrom, 1977). In all cases, we used four initial points, which allows us to test the global convergence of the new modifications. All algorithms are coded in MATLAB R2014a. The tests were run on an Intel(R) Core™ i5-M520 (2.40 GHz) with 4 GB of RAM under the Windows 7 Professional operating system. The numerical results, based on the number of iterations and CPU time, are presented in Table 1. The performance of these methods is shown in Figure 1 and Figure 2, respectively, using the performance profile introduced by Dolan and Moré (Dolan and Moré, 2002). From these figures, it can be seen that the new modifications outperform the classical FR and PRP methods in terms of both the number of iterations and CPU time. The results show that the proposed methods solve 99.4% and 98% of the test problems, respectively, whereas the FR method solves about 72% and the PRP method solves 97% of the test problems. Hence, we can say that the two modifications are more efficient and exhibit robust performance.
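Performance profiles such as those in Figures 1 and 2 can be produced from data like Table 1 with a short MATLAB routine. The sketch below assumes the results are arranged as an np-by-ns matrix T of iteration counts or CPU times, with failures coded as Inf; the function name perf_profile and this data layout are our own assumptions, not part of the original experimental code.

    % Dolan-More performance profile (sketch).
    % T     : np-by-ns matrix of costs (iterations or CPU time), Inf = failure
    % names : 1-by-ns cell array of solver names for the legend
    function perf_profile(T, names)
        [np, ns] = size(T);
        r   = T ./ repmat(min(T, [], 2), 1, ns);   % performance ratios
        tau = sort(unique(r(isfinite(r))));        % evaluation points
        rho = zeros(numel(tau), ns);
        for s = 1:ns
            for t = 1:numel(tau)
                rho(t, s) = sum(r(:, s) <= tau(t)) / np;  % fraction solved
            end
        end
        semilogx(tau, rho);
        legend(names, 'Location', 'SouthEast');
        xlabel('\tau'); ylabel('\rho_s(\tau)');
    end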

Conclusion
In this paper, we presented the performance of two new modifications of the CG coefficient for nonlinear unconstrained optimization and compared them with the classical FR and PRP methods under exact line search. The proposed methods possess global convergence under some assumptions and satisfy the sufficient descent condition. Numerical results showed that the proposed methods outperform the classical FR and PRP methods in solving standard unconstrained optimization problems.