On Solving Differential Equations of Conservative-Problem Type in Real Dynamical Problems via a Low-Memory Approach

Real dynamical problems give rise to interesting differential equations that must be handled with efficient iterative methods. Very often the differential equations are discretized into systems of nonlinear equations and solved via Newton-like methods. These methods require computing and storing the Jacobian matrix, as well as solving a linear system, at each iteration. In this paper we present a new approach based on approximating the Jacobian by a nonsingular diagonal matrix by means of a variational technique. Numerical results on well-known benchmark conservative problems that demonstrate the reliability and efficiency of our approach are reported.


Introduction
Many real-life problems (e.g. robotics, radiative transfer, chemistry, economics, operational research, physics, statistics, engineering, and the social sciences) require the solution of systems of nonlinear equations of the form

F(x) = 0,  (1)

where F : R^n → R^n is a nonlinear mapping. Often, the mapping F is assumed to satisfy the following assumptions:
A1. There exists an x* ∈ R^n such that F(x*) = 0;
A2. F is a continuously differentiable mapping in a neighborhood of x*;
A3. F′(x*) is invertible.
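As a concrete illustration, a small two-dimensional instance of (1) satisfying A1–A3 can be sketched as follows; the function and root below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-D instance of F(x) = 0 (illustrative, not from the paper):
# the intersection of the unit circle with the line x_1 = x_2.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0,   # circle: x_1^2 + x_2^2 = 1
                     x[0] - x[1]])              # line:   x_1 = x_2

# A1: a root x* exists; A2: F is continuously differentiable;
# A3: F'(x*) = [[2a, 2b], [1, -1]] has determinant -2a - 2b != 0
# at x* = (sqrt(1/2), sqrt(1/2)), so the Jacobian is invertible there.
x_star = np.array([np.sqrt(0.5), np.sqrt(0.5)])
```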
The most famous method for finding a solution of (1) is Newton's method, which generates a sequence of iterates {x_k} from a given initial guess x_0 via

Computing and Applied Mathematics
x_{k+1} = x_k − F′(x_k)^{-1} F(x_k),  k = 0, 1, 2, …,  (2)

where F′(x_k) is the Jacobian of F at x_k. The attractive feature of this method is that, from any initial point x_0 in a neighbourhood of the solution, the convergence is quadratic (Argyros, 1993). Notwithstanding, an iteration of (2) is expensive, because it requires computing and storing the Jacobian matrix, as well as solving Newton's system, a linear system, at each iteration. This becomes impracticable as the dimension of the system increases. This disadvantage, among others, has attracted the attention of many researchers over time.
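A minimal sketch of iteration (2), with a forward-difference Jacobian; the function names, step size, and tolerances are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Sketch of Newton's iteration (2): x_{k+1} = x_k - F'(x_k)^{-1} F(x_k).
# The Jacobian is approximated by forward differences, so each iteration
# costs n extra F-evaluations plus O(n^2) storage and one linear solve.
def newton(F, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        h = 1e-7                         # forward-difference step (assumed)
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - Fx) / h
        x = x - np.linalg.solve(J, Fx)   # one n x n linear system per iteration
    return x

# Usage: solve x^2 - 2 = 0, y - x = 0, whose solution is x = y = sqrt(2).
root = newton(lambda x: np.array([x[0]**2 - 2.0, x[1] - x[0]]), [1.0, 1.0])
```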
Quite a number of revised Newton-type methods have been introduced to diminish the weaknesses of Newton's method, including the fixed Newton, inexact Newton, and quasi-Newton methods.
The quasi-Newton method is the major variant of the Newton-type methods; it replaces the Jacobian or its inverse with an approximation which can be updated at each iteration (Broyden, 1965), and is given as

x_{k+1} = x_k − B_k^{-1} F(x_k),  (3)

where the matrix B_k is the approximation of the Jacobian at x_k. The main idea behind quasi-Newton methods is to eliminate the evaluation cost of the Jacobian matrix; when function evaluations are very expensive, the cost of finding a solution by quasi-Newton methods can be much smaller than for some other Newton-like methods (Kelly and Northrup, 1988).
Various Jacobian approximation matrices, such as Broyden's method (Broyden, 1965; Broyden, 1967), have been proposed. However, the most critical part of such methods is storing the full matrix of the approximate Jacobian, which can be a very expensive task as the dimension of the system increases (Kelly, 1995). In this paper we suggest an alternative approximation of the Jacobian by a diagonal matrix, derived by means of variational techniques. The anticipation is to reduce computational cost, storage requirements, CPU time and floating-point operations. The proposed method works efficiently, and the results so far are very encouraging. This paper is arranged as follows: Section 2 presents the conservative problems; we present our proposed method in Section 3; numerical results are reported in Section 4; and finally a conclusion is given in Section 5.
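For contrast, Broyden's rank-one update that the paper later compares against can be sketched as follows; this is the standard textbook form, with B_0 = I as an illustrative choice:

```python
import numpy as np

# Sketch of Broyden's method (Broyden, 1965): the full n x n matrix B_k is
# stored and updated by a rank-one correction, which is exactly the O(n^2)
# memory cost this paper aims to avoid.
def broyden(F, x0, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                   # initial approximation B_0 = I (assumed)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        s = -np.linalg.solve(B, Fx)      # step from B_k s_k = -F(x_k)
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx                  # y_k = F(x_{k+1}) - F(x_k)
        B = B + np.outer(y - B @ s, s) / (s @ s)  # rank-one secant correction
        Fx = Fx_new
    return x

# Usage on the same small system: x^2 - 2 = 0, y - x = 0.
root = broyden(lambda x: np.array([x[0]**2 - 2.0, x[1] - x[0]]), [1.0, 1.0])
```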

The Conservative Problems
In real dynamical problems, energy is mostly dissipated through some form of friction. Nevertheless, in some circumstances the dissipation is so small that it can be neglected over relatively short periods of time. In such situations we may employ the law of conservation of energy, and any system of this type is known as a conservative system (Hernandez et al., 2005; Gutierrez and Hernandez, 1997; Hernandez and Salanova, 1998; Gutierrez and Hernandez, 1998; Dennis and Schnabel, 1983).
The general equation of motion of a mass m subject to a restoring force −ρ(x) and a damping force −α(dx/dt) is

m d²x/dt² = −α dx/dt − ρ(x),  (4)

which can be rewritten as

d²x/dt² + φ(dx/dt) + θ(x) = 0,  (5)

where φ(0) = 0 and θ(0) = 0 for otherwise arbitrary functions φ and θ. Hence, (5) can be considered the basic equation of nonlinear mechanics. The main focus of this paper is on the case where there is no dissipation of energy, i.e., the damping force is zero, that is

d²x/dt² + θ(x) = 0,  (6)

with a two-point boundary condition

x(0) = a, x(1) = b.  (7)

For more applications and discussions of (5), see Dennis and Schnabel (1983). Here, we seek a solution of problem (6)–(7) for t ∈ [0, 1]. Following Hernandez et al. (2005), (6) and (7) can be transformed to

d²x/dt² + x^{1+p} = 0,  (8)

with the boundary condition

x(0) = 0, x(1) = Q.  (9)

We proceed by transforming (8) into a system of nonlinear equations by replacing the derivative in the differential equation with its finite-difference approximation, that is

d²x/dt² |_{t=t_i} ≈ (x_{i−1} − 2x_i + x_{i+1}) / h²,  (10)

where h = 1/n and n is the dimension of the system. Substituting (10) into (8), we have

x_{i−1} − 2x_i + x_{i+1} + h² x_i^{1+p} = 0,  i = 1, …, n.

From the boundary values, x_0 = 0 and x_{n+1} = Q; hence we obtain the final discretized nonlinear system

F_i(x) = x_{i−1} − 2x_i + x_{i+1} + h² x_i^{1+p} = 0,  i = 1, …, n.

We note that this is not the first attempt to solve the conservative problem using iterative methods: Hernandez et al. (2005) obtained the solution using a secant-like method, but only for n = 10. In this paper, by contrast, we are mainly concerned with the large-scale case, say 100 ≤ n ≤ 5000. We consider p = 0.5, Q = 0.25 and Q = 0.125, and initial guesses x_0 = 0, x_0 = 0.2 and x_0 = 0.3.
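The discretized residual can be sketched as follows. This is a minimal sketch assuming a restoring term of the form x^{1+p} (following Hernandez et al., 2005) and boundary values x_0 = 0, x_{n+1} = Q; the function name and the sign-guard on the fractional power are illustrative choices.

```python
import numpy as np

# Sketch of the discretized conservative system:
# F_i(x) = x_{i-1} - 2 x_i + x_{i+1} + h^2 * x_i^(1+p) = 0, i = 1..n,
# with x_0 = 0 and x_{n+1} = Q, h = 1/n. Parameters p, Q are from the paper;
# the restoring term x^(1+p) is an assumption based on Hernandez et al. (2005).
def conservative_residual(x, p=0.5, Q=0.25):
    n = x.size
    h = 1.0 / n
    xe = np.concatenate(([0.0], x, [Q]))      # append boundary values x_0, x_{n+1}
    inner = xe[1:-1]
    # sign/abs guards the fractional power against slightly negative iterates
    restoring = np.sign(inner) * np.abs(inner)**(1.0 + p)
    return xe[:-2] - 2.0 * inner + xe[2:] + h**2 * restoring
```

At the zero initial guess only the last equation is nonzero, since the boundary value x_{n+1} = Q enters it directly.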
The difficulty of solving the discretized nonlinear equations grows dramatically as the dimension increases. Much effort has been devoted to overcoming this difficulty. We will show that the performance of our algorithm is not affected by it.

The Proposed Low Memory Method (LMDC)
In this section we present a new diagonal updating method for solving systems of nonlinear equations. The main idea is to approximate the Jacobian by a nonsingular diagonal matrix which can be updated at each iteration.
We start by approximating the Jacobian matrix F′(x_k) by a diagonal matrix, say D_k:

F′(x_k) ≈ D_k,  (11)

where D_k is a diagonal matrix updated at each iteration.
We require D_{k+1} to satisfy the weak secant equation s_k^T D_{k+1} s_k = s_k^T y_k, where s_k = x_{k+1} − x_k and y_k = F(x_{k+1}) − F(x_k), while minimizing the deviation between D_{k+1} and D_k in the Frobenius norm. In the following theorem we state the resulting update formula for D_{k+1}.

Theorem 1
Assume that D_k is a diagonal matrix and denote the diagonal update of D_k by D_{k+1}. Let Δ_k = D_{k+1} − D_k, s_k = x_{k+1} − x_k and y_k = F(x_{k+1}) − F(x_k), and suppose that s_k ≠ 0. Consider the following problem:

min (1/2) ǁΔ_kǁ_F²  subject to  s_k^T (D_k + Δ_k) s_k = s_k^T y_k,  (15)

where ǁ·ǁ_F denotes the Frobenius norm. Then the optimal solution of (15) is given by

D_{k+1} = D_k + ((s_k^T y_k − s_k^T D_k s_k) / tr(Ω_k²)) Ω_k,  (16)

where Ω_k = diag((s_k^{(1)})², (s_k^{(2)})², …, (s_k^{(n)})²), s_k^{(i)} is the i-th component of the vector s_k, and tr is the trace operation.

Proof.
From the fact that both the objective function and the constraint of (15) are convex, the unique solution can be obtained by considering the Lagrangian function

L(Δ_k, λ) = (1/2) ǁΔ_kǁ_F² + λ (s_k^T (D_k + Δ_k) s_k − s_k^T y_k),

where λ is the corresponding Lagrange multiplier. Setting the derivative of L with respect to each diagonal entry Δ_k^{(i)} to zero gives Δ_k^{(i)} = −λ (s_k^{(i)})². Substituting this into the constraint and solving for λ, after a little simplification we have

λ = (s_k^T D_k s_k − s_k^T y_k) / tr(Ω_k²),

and hence

Δ_k = ((s_k^T y_k − s_k^T D_k s_k) / tr(Ω_k²)) Ω_k,

which completes the proof. ∎

Therefore, from the above theorem, the best possible updating formula for the diagonal matrix D_{k+1} is given by (16). To safeguard against a possibly very small ǁs_kǁ, and hence a very small tr(Ω_k²), we require that ǁs_kǁ ≥ s_1 for some chosen small s_1 > 0; otherwise we set D_{k+1} = D_k. Then D_{k+1} is given as:

D_{k+1} = D_k + ((s_k^T y_k − s_k^T D_k s_k) / tr(Ω_k²)) Ω_k  if ǁs_kǁ ≥ s_1,  and  D_{k+1} = D_k  otherwise.

Finally, we present our iterative scheme (LMDC) as follows:
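A minimal sketch of such a diagonal-update iteration, with x_{k+1} = x_k − D_k^{-1} F(x_k) and the safeguarded update above; the initialization D_0 = I, the tolerance, and the safeguard value s_1 are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Sketch of a low-memory diagonal-update iteration of LMDC type:
# only the n-vector d (the diagonal of D_k) is stored, so memory is O(n)
# and each step is a componentwise division rather than a linear solve.
def lmdc(F, x0, tol=1e-8, max_iter=200, s1=1e-12):
    x = np.asarray(x0, dtype=float)
    d = np.ones_like(x)                  # diagonal of D_0 = I (assumed)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        x_new = x - Fx / d               # diagonal "solve": O(n) work
        s = x_new - x                    # s_k = x_{k+1} - x_k
        Fx_new = F(x_new)
        y = Fx_new - Fx                  # y_k = F(x_{k+1}) - F(x_k)
        omega = s**2                     # diagonal of Omega_k
        tr_omega2 = np.sum(omega**2)     # tr(Omega_k^2) = sum_i s_i^4
        if np.linalg.norm(s) >= s1 and tr_omega2 > 0.0:
            # update (16); otherwise the safeguard keeps D_{k+1} = D_k
            d = d + (s @ y - s @ (d * s)) / tr_omega2 * omega
        x, Fx = x_new, Fx_new
    return x

# Usage: in one dimension the update reduces exactly to the secant method,
# here applied to x^2 - 2 = 0.
root = lmdc(lambda x: x**2 - 2.0, np.array([1.5]))
```

Note that the update touches only the diagonal vector, which is what keeps the storage at O(n) as the dimension grows.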

Numerical Results
In this section, we use our proposed method to solve the conservative problem and compare its numerical performance with Broyden's method (BM). The comparison is based upon the following criteria: number of iterations, CPU time in seconds, and storage requirement. The computations are done in MATLAB 7.0 in double precision. The stopping criterion used is condition (23). The symbol "−" is used to indicate a failure due to: 1. the number of iterations reaching 200 without any iterate x_k satisfying (23); 2. the CPU time reaching 500 seconds; 3. insufficient memory to initialize the run. The LMDC and BM methods are compared in terms of CPU time, number of iterations, floating-point operations, and matrix storage requirements. From Tables 1–4 it is clear that all the methods can solve the problems when 25 ≤ n ≤ 500, but only LMDC is able to find the solutions when n > 500. This is a clear indication that it outperforms the BM method in terms of numerical stability when the dimension of the problem is large, due to the fact that LMDC requires much less computational effort and memory in building the approximation of the Jacobian. Indeed, the size of the update matrix grows as O(n) as the dimension of the system increases, as opposed to the BM method, for which it grows as O(n²). In addition, from numerical experience, x = 0 is close to the solution, which is why we have also tested our method with x = 0.2 and x = 0.3. Figures 1 and 2 show that LMDC's CPU time increases only linearly as the dimension of the system increases, whereas for BM the growth rate is exponential. This also suggests that our solver is a good alternative when the dimension of the problem is large.
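The O(n) versus O(n²) storage gap can be made concrete with a back-of-the-envelope count, assuming double-precision (float64, 8 bytes per entry) and n = 5000 as in the largest runs:

```python
# Back-of-the-envelope storage count for the approximate Jacobian:
# BM stores a full n x n matrix, LMDC only the n diagonal entries.
n = 5000
bytes_per_entry = 8                      # float64
bm_storage = n * n * bytes_per_entry     # O(n^2): 200 MB at n = 5000
lmdc_storage = n * bytes_per_entry       # O(n):   40 KB at n = 5000
ratio = bm_storage // lmdc_storage       # the gap itself grows linearly with n
```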


Figure 1. Comparison of LMDC and BM methods as the dimension increases in terms of CPU time, for Q = 0.25.
Figure 2. Comparison of LMDC and BM methods as the dimension increases in terms of CPU time, for Q = 0.125.