Adams methods. Solving differential equations by multi-step methods


Another way to construct difference schemes is to compute the next value using the results of not one but k previous steps, i.e., the values yi, yi-1, ..., yi-k+1. A scheme of this kind is called a k-step method.

Multi-step methods can be constructed as follows. Let us write the original equation (1.9) in the form

(1.28)

Let us integrate both sides of this equation with respect to x over the segment [xi, xi+1]. The integral of the left-hand side is

(1.29)

To calculate the integral of the right-hand side of equation (1.28), we first construct an interpolation polynomial Pk-1 of degree k - 1 that approximates the function on the segment from its values at the k previous nodes. After this we can write

(1.30)

Equating the expressions obtained in (1.29) and (1.30), we obtain a formula for the unknown value of the grid function at the new node:

This formula can be used to build multi-step methods of any order of accuracy. The order of accuracy depends on the degree of the interpolation polynomial, which is constructed from the values of the grid function computed at the k previous steps.

A widely used family of multi-step methods is the family of Adams methods. The simplest of them, obtained for k = 1, coincides with the Euler method of first-order accuracy considered earlier. In practical calculations the most common variant is the Adams method of fourth-order accuracy, which at each step uses the results of the four previous steps. This variant is what is usually called the Adams method, and we consider it below.

Let the values of the solution be known at four consecutive nodes (k = 4). Then the previously computed values of the right-hand side at these nodes are also available. As the interpolation polynomial P3(x) we can take Newton's polynomial (see Section 2.3). For a constant step h, the finite differences of the right-hand side at the node xi take the form

After the necessary transformations, the fourth-order difference scheme of the Adams method can then be written in the form

Comparing the Adams method with the Runge–Kutta method of the same order of accuracy, we note its efficiency: it requires evaluating only one value of the right-hand side at each step, whereas the Runge–Kutta method requires four. The Adams method is inconvenient, however, in that the computation cannot be started from the single known value y0 alone: it can begin only at the node x3, not at x0. The values y1, y2, y3 needed to compute y4 must be obtained in some other way (for example, by the Runge–Kutta method), which noticeably complicates the algorithm. In addition, the Adams method does not allow the step h to be changed during the computation without complicating the formulas; one-step methods do not have this drawback.
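The scheme just described can be sketched in code. The sketch below is only an illustration, not the author's program: the right-hand side f, the step, and the names rk4_step and adams4 are assumptions, and the starting values y1, y2, y3 are obtained by the Runge–Kutta method, as the text suggests.

```python
import math

def f(x, y):
    # Example right-hand side (an assumption for illustration);
    # the exact solution of y' = -2xy, y(0) = 1 is exp(-x^2).
    return -2.0 * x * y

def rk4_step(f, x, y, h):
    """One classical Runge-Kutta step, used here only to start the method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def adams4(f, x0, y0, h, n):
    """Integrate n steps; y1, y2, y3 come from the Runge-Kutta method."""
    xs, ys = [x0], [y0]
    for _ in range(3):                      # startup values
        ys.append(rk4_step(f, xs[-1], ys[-1], h))
        xs.append(xs[-1] + h)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for _ in range(n - 3):                  # main Adams loop
        y_next = ys[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2]
                                    + 37 * fs[-3] - 9 * fs[-4])
        xs.append(xs[-1] + h)
        ys.append(y_next)
        fs.append(f(xs[-1], y_next))        # only ONE new f-evaluation per step
    return xs, ys

xs, ys = adams4(f, 0.0, 1.0, 0.05, 20)
err = abs(ys[-1] - math.exp(-1.0))          # global error at x = 1
```

Note that after the startup phase the main loop evaluates f exactly once per step, which is the efficiency advantage discussed above.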

Let us consider another family of multi-step methods, which use implicit schemes: prediction and correction methods (also called predictor–corrector methods). The essence of these methods is as follows. Each step consists of two stages performed with multi-step methods: first, an explicit method (the predictor) uses the known values of the function at the previous nodes to find an initial approximation at the new node; then an implicit method (the corrector) refines this approximation by iteration.

One variant of the prediction and correction method can be obtained on the basis of the fourth-order Adams method. The final form of the difference relations at the predictor stage is

and at the corrector stage:

(1.33)

The explicit scheme (1.32) is applied once per step, while the implicit scheme (1.33) defines an iterative process for computing yi+1, since this value appears on the right-hand side of the expression.

Note that in these formulas, as in the Adams method, computing yi+1 requires the grid-function values at the four previous nodes: yi, yi-1, yi-2, yi-3. Therefore the computation by this method can begin only with y4. The necessary values y1, y2, y3 are found by the Runge–Kutta method, and y0 is given by the initial condition. This is a characteristic feature of multi-step methods. The algorithm for solving the Cauchy problem by the prediction and correction method considered here is shown in enlarged form in Fig. 1.4.

Fig. 1.4. Predictor-corrector method
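One step of the prediction and correction cycle described above can be sketched as follows. This is a hedged illustration: the names are invented, the predictor is the fourth-order Adams–Bashforth formula, and the corrector is the fourth-order Adams–Moulton formula applied once (a single iteration).

```python
import math

def pece_step(f, xs, ys, fs, h):
    """One PECE cycle; xs, ys, fs hold the four previous nodes i-3 .. i."""
    # P: prediction by the explicit (Adams-Bashforth) scheme
    y_pred = ys[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2]
                                + 37 * fs[-3] - 9 * fs[-4])
    x_next = xs[-1] + h
    # E: evaluate the right-hand side at the predicted point
    f_pred = f(x_next, y_pred)
    # C: one correction by the implicit (Adams-Moulton) scheme
    y_corr = ys[-1] + h / 24 * (9 * f_pred + 19 * fs[-1]
                                - 5 * fs[-2] + fs[-3])
    # E: final evaluation, saved for the next step
    return x_next, y_corr, f(x_next, y_corr)

# Illustration on y' = y; the four starting values are seeded with the
# exact solution exp(x) purely for clarity of the example.
h = 0.1
xs = [i * h for i in range(4)]
ys = [math.exp(x) for x in xs]
fs = ys[:]                                  # for y' = y, f(x, y) = y
x4, y4, f4 = pece_step(lambda x, y: y, xs, ys, fs, h)
```

In a complete program the triple returned by pece_step would be appended to the history lists, and the oldest node would no longer be needed.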

Currently, Adams methods are among the most promising numerical integration methods for solving the Cauchy problem. It has been shown that when multi-step Adams methods are applied to the Cauchy problem, the stability region decreases as the order grows up to the 12th, while the accuracy of the method increases with the order. Moreover, for the same accuracy, multi-step methods require fewer evaluations of the right-hand sides of the differential equations per integration step than Runge–Kutta methods. Among the advantages of Adams methods is also the ease with which the integration step and the order of the method can be changed.

In practice two types of Adams methods are widely used: explicit and implicit. The explicit methods are known as Adams–Bashforth methods, the implicit ones as Adams–Moulton methods.

Let us consider the use of numerical methods to solve the Cauchy problem

When problem (2.1) is solved by one-step methods, the value yn+1 depends only on information at the previous point xn. One may expect higher accuracy if information about several previous points xn, xn-1, ..., xn-k is used. Multi-step methods are based on this idea.

Most multi-step methods arise from the following approach. If we substitute the exact solution y(x) into equation (2.1) and integrate the equation over the segment [xn, xn+1], we obtain:

Replacing the function f (x, y (x)) in formula (2.2) with the interpolation polynomial P (x), we obtain an approximate method

To construct the polynomial P(x), we assume that yn, yn-1, ..., yn-k are approximations to the solution at the points xn, xn-1, ..., xn-k, and that the nodes xi are uniformly spaced with step h. Then fi = f(xi, yi), i = n, n-1, ..., n-k, are approximations to f(x, y(x)) at these points.

As P (x), we take an interpolation polynomial of degree k satisfying the conditions

If we integrate this polynomial explicitly, we get the following method:

When k=0, the polynomial P (x) is a constant equal to fn, and formula (2.4) turns into the usual Euler method.

For k=1, the polynomial P (x) is a linear function passing through the points (xn-1, fn-1) and (xn, fn), i.e.

Integrating this polynomial from xn to xn+1, we obtain a two-step method

which uses information at the two points xn-1 and xn.
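The two-step method just derived is easy to try in code. Below is a minimal sketch under the assumption that the model problem y' = y is acceptable for illustration; for simplicity the single missing starting value is produced by one Euler step, although, as discussed later, a starting method of matching accuracy should be used in practice.

```python
import math

def ab2(f, x0, y0, h, n):
    """n steps of the two-step method y_{n+1} = y_n + h/2 (3 f_n - f_{n-1})."""
    # One Euler step supplies the missing starting value; in a real
    # program it should be computed to the accuracy of the main method.
    xs, ys = [x0, x0 + h], [y0, y0 + h * f(x0, y0)]
    for _ in range(n - 1):
        fn = f(xs[-1], ys[-1])
        fn_prev = f(xs[-2], ys[-2])
        ys.append(ys[-1] + h / 2 * (3 * fn - fn_prev))
        xs.append(xs[-1] + h)
    return xs, ys

# y' = y, y(0) = 1; ys[-1] approximates e at x = 1
xs, ys = ab2(lambda x, y: y, 0.0, 1.0, 0.01, 100)
```

With step h = 0.01 the result agrees with e to roughly second-order accuracy, consistent with the order of the method.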

If k=2, then P(x) is a quadratic polynomial interpolating the data (xn-2, fn-2), (xn-1, fn-1) and (xn, fn). It can be shown that the corresponding method has the form

If k=3, then the corresponding method is given by the formula

For k=4 we have

Note that method (2.7) is a three-step method, (2.8) is a four-step method, and (2.9) is a five-step method. Formulas (2.6) - (2.9) are known as the Adams-Bashforth methods. Method (2.6) has second order accuracy, so it is called the second order Adams-Bashforth method. Similarly, methods (2.7), (2.8) and (2.9) are called third-, fourth- and fifth-order Adams-Bashforth methods, respectively.

Continuing this process, using an increasing number of previous points, as well as an interpolating polynomial of a higher degree, we obtain Adams-Bashforth methods of arbitrarily high order.
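The claim that methods of arbitrarily high order can be obtained this way can be made concrete: the coefficients follow from exact integration of the Lagrange basis polynomials of the interpolant. The sketch below is an illustration (not part of the original derivation) that computes them in exact rational arithmetic using only the standard library.

```python
from fractions import Fraction

def ab_coefficients(k):
    """Coefficients b_0..b_k in y_{n+1} = y_n + h * sum_j b_j * f_{n-j}."""
    coeffs = []
    for j in range(k + 1):
        # Lagrange basis polynomial L_j(s) with nodes s = 0, -1, ..., -k,
        # stored as a coefficient list, lowest degree first.
        poly = [Fraction(1)]
        for i in range(k + 1):
            if i == j:
                continue
            c = Fraction(1, i - j)          # multiply poly by (s + i)/(i - j)
            new = [Fraction(0)] * (len(poly) + 1)
            for d, a in enumerate(poly):
                new[d] += a * i * c         # constant term of (s + i)
                new[d + 1] += a * c         # s term
            poly = new
        # integrate L_j(s) over s in [0, 1]
        coeffs.append(sum(a / (d + 1) for d, a in enumerate(poly)))
    return coeffs

# k = 3 reproduces the fourth-order coefficients 55/24, -59/24, 37/24, -9/24
print(ab_coefficients(3))
```

Setting k = 0 returns the single coefficient 1 (the Euler method), and k = 1 returns 3/2 and -1/2, matching the two-step formula above.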

Multi-step methods introduce difficulties that do not arise with single-step methods. These difficulties become clear if, for example, we turn to the fifth-order Adams–Bashforth method (2.9).

In problem (2.1) the initial value y0 is given, but for n = 0 formula (2.9) requires information at the points x-1, x-2, x-3, x-4, which is naturally missing. The usual way out is to use some one-step method of the same order of accuracy, such as a Runge–Kutta method, until enough values have been computed for the multi-step method to work. Alternatively, one can use a one-step method on the first step, a two-step method on the second, and so on, until all the starting values are obtained. It is essential that these starting values be computed to the same accuracy with which the final method will work. Since the starting methods have a lower order of accuracy, one has to compute with smaller steps at the beginning and use more intermediate points.

The derivation of methods (2.6)–(2.9) is based on replacing the function f(x, y) with an interpolation polynomial P(x). A well-known theorem establishes the existence and uniqueness of the interpolation polynomial: if the nodes x0, x1, ..., xn are distinct, then for any f0, f1, ..., fn there is a unique polynomial P(x) of degree at most n such that P(xi) = fi, i = 0, 1, ..., n.

Although the interpolation polynomial is unique, there are several ways to represent it. Lagrange polynomials are used most often, but they are inconvenient when a node must be added to (or removed from) the data set. For this case there is a different representation of the interpolation polynomial: Newton's representation.

The polynomial Pn+1 (x) can be written as

The representation of the interpolation polynomial in the form (2.11) in a number of cases can be especially useful for practice.
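Newton's representation is convenient to compute from divided differences. The following sketch is illustrative (the function names are assumptions); it builds the coefficients of form (2.11) in place and evaluates the polynomial with a Horner-type scheme.

```python
def divided_differences(xs, fs):
    """Coefficients f[x0], f[x0,x1], ..., f[x0..xn] of Newton's form."""
    coef = list(fs)
    n = len(xs)
    for level in range(1, n):
        # update in place from the bottom so lower-order differences survive
        for i in range(n - 1, level - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - level])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate c0 + c1 (x-x0) + c2 (x-x0)(x-x1) + ... by a Horner scheme."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
fs = [x ** 3 for x in xs]              # data taken from f(x) = x^3
coef = divided_differences(xs, fs)
value = newton_eval(xs, coef, 1.5)     # a cubic reproduces x^3 exactly
```

Adding a new node appends one coefficient and one product term, which is exactly the convenience of Newton's form mentioned above.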

Adams–Bashforth methods use the already known values at the points xn, xn-1, ..., xn-k. When constructing the interpolation polynomial, one can also use the point xn+1 together with xn, xn-1, ..., xn-k. This gives rise to a class of implicit multi-step methods known as Adams–Moulton methods.

If k=0, then P (x) is a linear function passing through the points (xn, fn) and (xn+1, fn+1), and the corresponding method

is the second order Adams-Moulton method.

For k=1, 2, 3 we obtain the corresponding methods

of third, fourth and fifth orders of accuracy. Relations (2.12)–(2.15) contain the desired value yn+1 implicitly, so iterative methods are needed to implement them.

In practice equations (2.12)–(2.15) are usually not solved directly; instead, the explicit and implicit forms are used together, which leads to a prediction and correction method.

For example, for the second-order Adams method, denoting the iteration number by r, we have the following computational scheme for r = 1:

This process is called a PECE method (P stands for applying the predictor formula, C for applying the corrector formula, E for evaluating the function f). The calculation can be shortened by discarding the last formula, which leads to the so-called PEC method.

Let us consider a second way of solving equations (2.12)–(2.15). They can be rewritten as

where gn contains only known quantities. It can be proven that if the step h is small enough (the condition involves the Lipschitz constant L of f), then equation (2.17) has a unique solution, which can be obtained by the iterative process

where the initial approximation is chosen arbitrarily.

Iterations in expression (2.18) continue until convergence is achieved. In this case, the number of calculations of the function f varies from point to point and can be quite large.

On the other hand, if the value of h is reduced, then convergence can be achieved in a fixed number of iterations. This method is called correction to convergence.
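The "correction to convergence" idea can be sketched for the simplest implicit case, the trapezoidal (second-order Adams–Moulton) step. This is an illustration under the assumption h·L/2 < 1, which makes the iteration a contraction; the names and the tolerance are invented.

```python
def implicit_step(f, x_n, y_n, h, tol=1e-12, max_iter=50):
    """Solve y_{n+1} = y_n + h/2 (f_n + f_{n+1}) by successive approximations."""
    x_next = x_n + h
    g = y_n + h / 2 * f(x_n, y_n)          # the known part g_n
    y = y_n + h * f(x_n, y_n)              # explicit (Euler) starting guess
    for _ in range(max_iter):
        y_new = g + h / 2 * f(x_next, y)   # a contraction when h*L/2 < 1
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    raise RuntimeError("iteration did not converge; reduce the step h")

# y' = -y, one step from y(0) = 1; the exact trapezoidal value is
# (1 - h/2) / (1 + h/2).
y1 = implicit_step(lambda x, y: -y, 0.0, 1.0, 0.1)
```

Here h·L/2 = 0.05, so each iteration shrinks the error by a factor of about 20 and convergence takes only a handful of iterations; a larger h would require more iterations or fail to converge at all.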

At first glance the explicit multi-step method may seem the simplest from a computational point of view. In practice, however, purely explicit methods are used quite rarely: the implicit Adams–Moulton method is more accurate than the explicit Adams–Bashforth method of the same order. For example, the computational scheme of the fifth-order Adams–Moulton method is as follows:

Adams methods up to the fifth order inclusive can be used to solve ordinary differential equations that do not require a high degree of accuracy.

As in the case of the Adams-Bashforth method, when using the Adams-Moulton method, an important issue is the choice of the optimal relationship between the integration step and the order of the method. It should be noted that when creating effective algorithms and programs, increasing the order of the method is preferable to decreasing the integration step.

To solve more complex problems, higher-order Adams methods must be applied. Table 2.1 shows the coefficient values for the Adams methods: the first line gives the order of the method; the second, the values of the coefficients Ck for the corresponding order k; the following lines, the pairs of coefficients Bkj and Mkj for the Adams–Bashforth and Adams–Moulton methods, respectively. Then, using the data of Table 2.1, the coefficients in the expression

for the Adams-Bashforth method of kth order can be found from the relation

and for the Adams-Moulton method of kth order using a similar formula

Formulas for the Adams prediction and correction methods from the 6th to the 16th order are as follows:

  • 6th order:
  • 7th order:
  • 8th order:
  • 9th order:
  • 10th order:
  • 11th order:
  • 12th order:
  • 13th order:
  • 14th order:
  • 15th order:
  • 16th order:

For the practical solution of ordinary differential equations or systems of first-order differential equations with a constant integration step, it is preferable to use the formulas given above. If the integration step varies during the solution, Adams methods require special techniques for introducing new initial data when the step changes.

For S = 1, formula (6.16) takes the form

For Q = 2, we obtain the following computational rule:

In practice, the extrapolation formula (6.18) is usually applied first, and the resulting value is then corrected by formula (6.23). If the change introduced by the correction does not exceed the permissible calculation error, the step h is considered acceptable.

Formulas (6.18) and (6.23) in finite-difference form are inconvenient for computer calculations. Taking (6.21) into account, they can be represented in the form

(6.24)

The formulas given are quite accurate: they have an error of order O(h^4), but the formulas for estimating the error are themselves rather complicated. The error can be estimated approximately by Runge's rule.
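Runge's rule can be illustrated as follows: the calculation is repeated with steps h and h/2, and for a method of order p the error of the finer result is estimated from the difference of the two results divided by 2^p - 1. The problem, the method (fourth-order Runge–Kutta here), and all names below are illustrative assumptions.

```python
import math

def rk4_solve(f, x0, y0, h, n):
    """n steps of the classical fourth-order Runge-Kutta method."""
    y = y0
    for i in range(n):
        x = x0 + i * h
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

f = lambda x, y: -2 * x * y            # exact solution: exp(-x^2)
p = 4                                   # order of the method
y_h = rk4_solve(f, 0.0, 1.0, 0.1, 10)      # step h
y_h2 = rk4_solve(f, 0.0, 1.0, 0.05, 20)    # step h/2
error_estimate = abs(y_h2 - y_h) / (2 ** p - 1)   # Runge's rule
actual_error = abs(y_h2 - math.exp(-1.0))
```

The estimate costs one extra integration but requires no derivatives of the solution, which is why it is so commonly used for step-size control.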

Example 6.2. Solve the differential equation on the given segment with initial condition y(x = 0) = 1. Find the solution by the Adams method (with correction) at the point x4, obtaining the values at the first three points by the Runge–Kutta method with the given step.

Solution. We take the function values at the first four points from Table 6.1 (see the example in the previous section). It is now clear why we saved the values of the first derivative at those points (see formulas (6.24)).

x4 = x3 + h = 0.15 + 0.05 = 0.2;

In order to correct the result obtained, it is necessary to calculate the value of the derivative at this point:

Now let us refine the value using the interpolation formula (this step may be omitted, but the error of the method will then be larger):

Since the corrected value is taken as the new value of the function, the value of the derivative must necessarily be recalculated. In our case the modulus of the difference between the extrapolation and interpolation values is less than ε, which allows the calculation to continue with the same step.

Self-test questions

· Formulate the Cauchy problem for first-order ordinary differential equations.

· What is the solution to a differential equation: a) in higher mathematics, b) in applied mathematics?

· Which methods for solving differential equations are called one-step, and which multi-step? Give examples.

· Compare the values obtained in the first and second steps by the Euler, Runge–Kutta and Taylor-series expansion methods (labor intensity, error, etc.).

· How to evaluate the error of the method used? How to reduce it?

· Compare one-step and multi-step methods for solving differential equations, indicating the advantages and disadvantages of the first and second.

· What are Adams' extrapolation and interpolation methods (formulas)?

· Is it possible to use: a) only Adams extrapolation methods,
b) only interpolation?

· Is it possible to use: a) multi-step methods without single-step ones;
b) one-step methods without multi-step ones?

· When solving a differential equation using the Adams method, at the 27th step it is necessary to change the step. How to do it?

We still have the same Cauchy problem.

f⁽¹⁾(t) = F(t, f(t)), a ≤ t ≤ b, f(a) = fa.

In one-step methods the value f(tk+1) is determined only by information at the previous point tk. It seems possible to increase the accuracy of the solution by using information at several previous points, when it is available; this is what is done in the methods called multi-step. It is clear from the statement of the problem that at the starting moment t = ta there is only one initial condition, so if we intend to work with two, three or four previous points, there is no way to obtain the extra starting values except by one-step methods. That is indeed what is done; a "compound" solution algorithm might look like this:

in the first step the second point is obtained by a one-step method; in the second, the third point by the two-step method; in the third, the fourth point by the three-step method, and so on, until enough previous points have been collected for the main method that is to be used.

Another option is to obtain the entire starting set of points by a one-step method, such as fourth-order Runge–Kutta. Since multi-step methods are assumed to be more accurate, the starting one-step method is usually run through a larger number of intermediate points, i.e. with smaller steps.

Multi-step algorithms can be created like this. Considering that

f(tk+1) = f(tk) + ∫[tk, tk+1] F(t, f(t)) dt,

we can numerically integrate the right-hand side of the ODE under the integral sign. Using the rectangle rule (the interpolating polynomial for the integrand is a constant) yields the usual Euler method. If we use two points and a first-order interpolation polynomial

p(t) = Fk-1 + (Fk − Fk-1)(t − tk-1)/h,

then integrating it from tk to tk+1 gives the following algorithm:

f(tk+1) = f(tk) + 0.5h(3Fk − Fk-1).

Similarly, for three points we have a quadratic interpolating polynomial through the data (tk-2, Fk-2), (tk-1, Fk-1), (tk, Fk), and its integration gives the algorithm:

f(tk+1) = f(tk) + (h/12)(23Fk − 16Fk-1 + 5Fk-2).

For four points the polynomial is cubic, and its integration gives:

f(tk+1) = f(tk) + (h/24)(55Fk − 59Fk-1 + 37Fk-2 − 9Fk-3).

In principle, we could continue this way indefinitely.

The given algorithms are called Adams-Bashforth methods of the second, third and fourth orders.

Formally, when constructing the interpolation polynomial one can use, in addition to the N already computed points, R future points tk+1, tk+2, ...; in the simplest case this is the set

tk+1, tk, tk-1, ..., tk-N.

This generates the class of so-called Adams–Moulton methods. In the four-step version it operates on the data (tk+1, Fk+1), (tk, Fk), (tk-1, Fk-1), (tk-2, Fk-2), and its algorithm is:

f(tk+1) = f(tk) + (h/24)(9Fk+1 + 19Fk − 5Fk-1 + Fk-2).

Of course, calculations cannot be based on data that is not yet available, so the Adams algorithms are combined into a sequence of Adams–Bashforth and Adams–Moulton steps, yielding the so-called prediction and correction methods. For example, the fourth-order prediction and correction method looks like this. First we predict by the Adams–Bashforth algorithm using the "past" points:

f(tk+1) = f(tk) + (h/24)(55Fk − 59Fk-1 + 37Fk-2 − 9Fk-3).

Then we calculate the approximate value of the right side of the equation

Fk+1 = F(tk+1, f(tk+1)).

And finally, we correct f(tk+1) using this approximate value:

f(tk+1) = f(tk) + (h/24)(9Fk+1 + 19Fk − 5Fk-1 + Fk-2).

The most effective available computer programs that allow the user to change the step size and the order of the method are based on high-order Adams methods (above the 10th). Experience with these programs shows that differences in their implementation can affect accuracy more significantly than differences in the intrinsic properties of the methods themselves.

Adams's explicit scheme.

The methods discussed above are explicit one-step methods (to find the next approximation, only one previous approximation is used). The method below is a multi-step one.

Let the Cauchy problem be given:

For the exact solution (which we do not know) the following holds:

Suppose we know approximate values of the function u(x) at k points (the starting k points can be found, in particular, by the Euler method or by a Runge–Kutta method of one order or another). Then the function f(x, u(x)) in (2.4.2) can be replaced, for the approximate calculation of the integral, by an interpolation polynomial of order k - 1 constructed over these k points; its integral is computed explicitly and is a linear combination of the known values with certain coefficients. We thus obtain the following recurrent procedure for computing approximate values of the function u(x) (the exact solution of the Cauchy problem) at the nodes:

The described scheme is the k-step explicit Adams formula.

Adams' implicit scheme.

Now let the interpolation polynomial be of order k, constructed from k + 1 values, one of which, namely the value at the new node, we regard as unknown. Let us modify (2.4.3) by replacing the polynomial with this one of higher degree, whose integral is expressed as a linear combination of the values with some new coefficients:

Formula (2.4.4) is the implicit Adams scheme; it is an equation for the unknown value, which can be solved by the method of successive approximations. Naturally, the initial approximation must be chosen wisely. For this it is convenient to combine the explicit and implicit Adams schemes into a single prediction and correction method: the explicit scheme supplies the initial approximation (the prediction), and the implicit scheme then corrects it the required number of times (usually once or twice) by successive approximations until the specified accuracy is reached (the correction).
