Numerical Methods for Differential Equations
It is not always possible to obtain the closed-form solution of a differential equation. In this section we introduce numerical methods for solving differential equations. First we treat first-order equations, and in the next section we show how to extend the techniques to higher-order equations. The essence of a numerical method is to convert the differential equation into a difference equation that can be programmed on a calculator or digital computer. Numerical algorithms differ partly as a result of the specific procedure used to obtain the difference equations. In general, as the accuracy of the approximation is increased, so is the complexity of the programming involved. Understanding the concept of step size and its effects on solution accuracy is important. To provide a simple introduction to these issues, we begin with the simplest numerical method, the Euler method.
The Euler Method
The Euler method is the simplest algorithm for numerical solution of a differential equation. It usually gives the least accurate results but provides a basis for understanding more sophisticated methods. Consider the equation

dy/dt = r(t)                                        (8.5-1)
where r(t) is a known function. From the definition of the derivative,

dy/dt = lim (Δt → 0) [y(t + Δt) − y(t)]/Δt

If the time increment Δt is small enough, the derivative can be replaced by the approximate expression

dy/dt ≈ [y(t + Δt) − y(t)]/Δt                       (8.5-2)
Use (8.5-2) to replace (8.5-1) by the following approximation:

[y(t + Δt) − y(t)]/Δt ≈ r(t)                        (8.5-3)
Assume that the right side of (8.5-1) remains constant over the time interval (t, t + Δt). Then equation (8.5-3) can be written in the more convenient form

y(tk+1) = y(tk) + r(tk) Δt                          (8.5-4)
where tk+1 = tk + Δt. The smaller Δt is, the more accurate are the two assumptions leading to (8.5-4). This technique for replacing a differential equation with a difference equation is the Euler method. The increment Δt is called the step size. The Euler method for the general first-order equation dy/dt = f(t, y) is

y(tk+1) = y(tk) + Δt f(tk, y(tk))                   (8.5-5)
This equation can be applied successively at the times tk by putting it in a for loop. For example, the following script file solves the differential equation dy/dt = ry and plots the solution over the range 0 ≤ t ≤ 0.5 for the case where r = −10 and the initial condition is y(0) = 2. The time constant is τ = −1/r = 0.1, and the true solution is y(t) = 2e^(−10t). To illustrate the effect of the step size on the solution's accuracy, we use a step size Δt = 0.02, which is 20 percent of the time constant.
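The MATLAB script itself is not reproduced here. The same Euler loop can be sketched in Python (the variable names and the final comparison are my own choices, not the text's):

```python
import math

r = -10.0          # coefficient in dy/dt = r*y
dt = 0.02          # step size (20 percent of the time constant 0.1)
t_final = 0.5

t, y = 0.0, 2.0    # initial condition y(0) = 2
ts, ys = [t], [y]
while t < t_final - 1e-12:
    y = y + r * y * dt      # Euler step: y(k+1) = y(k) + f(tk, yk)*dt
    t = t + dt
    ts.append(t)
    ys.append(y)

# Compare the final numerical value with the true solution 2*exp(-10*t)
y_true = 2 * math.exp(-10 * t_final)
print(ys[-1], y_true)
```

With this step size the final Euler value visibly undershoots the true value, which is the error seen in the plot.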
Figure 8.5-1 shows the results. The numerical solution is shown by the small circles. The true solution is shown by the solid line. There is some noticeable error. If we had used a step size equal to 5 percent of the time constant, the error would not be noticeable on the plot.
Numerical methods have their greatest errors when trying to obtain solutions that are rapidly changing. Rapid changes can be due to small time constants or to oscillations. To illustrate the difficulties caused by an oscillating solution, consider the following equation:

dy/dt = sin t
for y(0) = 0 and 0 ≤ t ≤ 4π. The true solution is y(t) = 1 − cos t, and its period is 2π. To compare the results with those obtained from the ode23 function later on, we will use a step size equal to 1/13 of the period, or Δt = 2π/13. The script file is
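Again the MATLAB script is omitted here; a Python sketch of the same Euler loop (the error bookkeeping is my own addition, used only to quantify what the plot shows):

```python
import math

h = 2 * math.pi / 13          # step size: 1/13 of the period
t, y = 0.0, 0.0               # initial condition y(0) = 0
max_err = 0.0
while t < 4 * math.pi - 1e-9:
    y = y + math.sin(t) * h   # Euler step for dy/dt = sin(t)
    t = t + h
    # track the worst deviation from the true solution 1 - cos(t)
    max_err = max(max_err, abs(y - (1 - math.cos(t))))

print(max_err)
```

The largest deviations occur near the peaks and valleys, where the solution changes fastest.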
The results are shown in figure 8.5-2, where the numerical solution is shown by the small circles. There is noticeable error, especially near the peaks and valleys, where the solution is rapidly changing.
The accuracy of the Euler method can be improved by using a smaller step size. However, very small step sizes require longer runtimes and can result in a large accumulated error because of round-off effects. So we seek better algorithms to use for more challenging applications.
The Predictor-Corrector Method
We now consider two methods that are more powerful than the Euler method. The first is a predictor-corrector method. The second method is the Runge-Kutta family of algorithms. In this section we apply these techniques to first-order equations. We then extend them to higher-order equations in the next section.
The Euler method can have a serious deficiency in problems where the variables are rapidly changing because the method assumes the variables are constant over the time interval Δt. One way of improving the method is to use a better approximation to the right side of the equation

dy/dt = f(t, y)                                     (8.5-7)
The Euler approximation is

y(tk+1) = y(tk) + Δt f(tk, y(tk))                   (8.5-8)
Suppose instead we use the average of the right side of (8.5-7) on the interval (tk, tk+1). This gives

y(tk+1) = y(tk) + (Δt/2)(fk + fk+1)                 (8.5-9)

where

fk = f(tk, y(tk))                                   (8.5-10)

with a similar definition for fk+1. Equation (8.5-9) is equivalent to integrating (8.5-7) with the trapezoidal rule.
The difficulty with (8.5-9) is that fk+1 cannot be evaluated until y(tk+1) is known, but this is precisely the quantity being sought. A way out of this difficulty is to use the Euler formula (8.5-8) to obtain a preliminary estimate of y(tk+1). This estimate is then used to compute fk+1 for use in (8.5-9) to obtain the required value of y(tk+1).
The notation can be changed to clarify the method. Let h = Δt and yk = y(tk), and let xk+1 be the estimate of y(tk+1) obtained from the Euler formula (8.5-8). Then, by omitting the tk notation from the other equations, we obtain the following description of the predictor-corrector process:

Euler predictor:       xk+1 = yk + h f(tk, yk)                        (8.5-11)

Trapezoidal corrector: yk+1 = yk + (h/2)[f(tk, yk) + f(tk+1, xk+1)]   (8.5-12)
This algorithm is sometimes called the modified Euler method. However, note that any algorithm can be tried as a predictor or a corrector. Thus many other methods can be classified as predictor-corrector. For purposes of comparison with the Runge-Kutta methods, we can express the modified Euler method as

k1 = f(tk, yk)                                      (8.5-13)
k2 = f(tk + h, yk + h k1)                           (8.5-14)
yk+1 = yk + (h/2)(k1 + k2)                          (8.5-15)
For example, the following script file solves the differential equation dy/dt = ry and plots the solution over the range 0 ≤ t ≤ 0.5 for the case where r = −10 and the initial condition is y(0) = 2. The time constant is τ = −1/r = 0.1, and the true solution is y(t) = 2e^(−10t). To illustrate the effect of the step size on the solution's accuracy, we use a step size Δt = 0.02, which is 20 percent of the time constant.
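The MATLAB script is again not shown here. A Python sketch of the predictor-corrector loop for this example (names are my own):

```python
import math

def f(t, y):
    return -10.0 * y       # right side of dy/dt = r*y with r = -10

h = 0.02
t, y = 0.0, 2.0            # initial condition y(0) = 2
ts, ys = [t], [y]
while t < 0.5 - 1e-12:
    k1 = f(t, y)                   # slope at the start of the interval
    x = y + h * k1                 # Euler predictor
    k2 = f(t + h, x)               # slope at the end, at the predicted point
    y = y + (h / 2) * (k1 + k2)    # trapezoidal corrector
    t += h
    ts.append(t)
    ys.append(y)

print(ys[-1], 2 * math.exp(-10 * t))   # numerical vs. true final value
```

With the same step size, the final value lands much closer to the true solution than the plain Euler result does.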
Figure 8.5-3 shows the results, with the numerical solution shown by the small circles and the true solution by the solid line. There is less error than with the Euler method using the same step size.
To illustrate how the modified Euler method works with an oscillating solution, consider the equation dy/dt = sin t for y(0) = 0 and 0 ≤ t ≤ 4π. To compare the results with those obtained with the Euler method, we will use the same step size, Δt = 2π/13. Note that because the right side of the ODE is not a function of y, we do not need the Euler predictor (8.5-11). The script file is
The results are shown in Figure 8.5-4. The error is less than with the Euler method, but there is still some noticeable error near the peaks, where the solution is rapidly changing. This error can be decreased by decreasing the step size.
Runge-Kutta Methods
The Taylor series representation forms the basis of several methods for solving differential equations, including the Runge-Kutta methods. The Taylor series may be used to represent the solution y(t + h) in terms of y(t) and its derivatives as follows:

y(t + h) = y(t) + h y'(t) + (h^2/2!) y''(t) + (h^3/3!) y'''(t) + ···    (8.5-16)
The number of terms kept in the series determines its accuracy. The required derivatives are calculated from the differential equation. If these derivatives can be found, (8.5-16) can be used to march forward in time. In practice, the high-order derivatives can be difficult to calculate, and the series (8.5-16) is truncated at some term. The Runge-Kutta methods were developed because of the difficulty in computing the derivatives. These methods use several evaluations of the function f(t, y) in a way that approximates the Taylor series. The number of terms in the series that is duplicated determines the order of the Runge-Kutta method. Thus a fourth-order Runge-Kutta algorithm duplicates the Taylor series through the term involving h^4.
The second-order Runge-Kutta methods express yk+1 as

yk+1 = yk + h(w1 k1 + w2 k2)                        (8.5-17)

where

k1 = f(tk, yk)                                      (8.5-18)
k2 = f(tk + αh, yk + βh k1)                         (8.5-19)
The family of second-order Runge-Kutta algorithms is categorized by the parameters α, β, w1, and w2. To duplicate the Taylor series through the h^2 term, these coefficients must satisfy the following:

w1 + w2 = 1        α w2 = 1/2        β w2 = 1/2    (8.5-20)
Thus one of the parameters can be chosen independently. The family of fourth-order Runge-Kutta algorithms expresses yk+1 as

yk+1 = yk + h(w1 k1 + w2 k2 + w3 k3 + w4 k4)

where k1 = f(tk, yk), and k2, k3, and k4 are evaluations of f at intermediate points determined by the remaining parameters.
Comparison with the Taylor series yields eight equations for the 10 parameters. Thus two parameters can be chosen in light of other considerations. A number of different choices have been used. For example, the classical method, which reduces to Simpson's rule for integration if f(t, y) is a function of only t, uses parameter values that give the following algorithm:

k1 = f(tk, yk)
k2 = f(tk + h/2, yk + (h/2)k1)
k3 = f(tk + h/2, yk + (h/2)k2)
k4 = f(tk + h, yk + h k3)
yk+1 = yk + (h/6)(k1 + 2k2 + 2k3 + k4)
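The classical fourth-order Runge-Kutta step can be sketched in Python (a generic illustration; the step function and the test equation dy/dt = −10y are my own choices):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + (h / 2) * k1)
    k3 = f(t + h / 2, y + (h / 2) * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Check against dy/dt = -10*y, y(0) = 2, whose solution is 2*exp(-10*t)
f = lambda t, y: -10.0 * y
t, y, h = 0.0, 2.0, 0.02
while t < 0.5 - 1e-12:
    y = rk4_step(f, t, y, h)
    t += h
print(y, 2 * math.exp(-10 * t))
```

Even with the same 20-percent step size used earlier, the fourth-order result agrees with the true solution to several digits.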
MATLAB ODE Solvers ode23 and ode45
In addition to the many variations of the predictor-corrector and Runge-Kutta algorithms that have been developed, some more-advanced algorithms use a variable step size. These algorithms use larger step sizes when the solution is changing more slowly. MATLAB provides functions, called solvers, that implement Runge-Kutta methods with a variable step size. These are the ode23, ode45, and ode113 functions. The ode23 function uses a combination of second- and third-order Runge-Kutta methods, whereas ode45 uses a combination of fourth- and fifth-order methods. In general, ode45 is faster and more accurate, but it uses larger step sizes that can produce a solution plot that is not as smooth as the plot produced with ode23. These solvers are classified as low order and medium order, respectively. The solver ode113 is based on a variable-order algorithm.
MATLAB contains four additional solvers: ode15s, ode23s, ode23t, and ode23tb. These and the other solvers are categorized in Table 8.5-1. Some of these solvers are classified as “stiff.” A stiff solver is one that is well suited for solving stiff equations, which are described in Section 8.8.
In this section we limit our coverage to first-order equations. Solution of higher-order equations, where y is a vector, is covered in Section 8.6. When used to solve the equation dy/dt = f(t, y), the basic syntax is (using ode23 as the example)

[t, y] = ode23(@ydot, tspan, y0)
where ydot is the name of the function file whose inputs must be t and y and whose output must be a column vector representing dy/dt, that is, f(t, y). The number of rows in this column vector must equal the order of the equation. The syntax for ode23, ode45, and ode113 is identical. The vector tspan contains the starting and ending values of the independent variable t and, optionally, any intermediate values of t where the solution is desired. For example, if no intermediate values are specified, tspan is [t0, tf], where t0 and tf are the desired starting and ending values of the independent parameter t. As another example, using tspan = [0, 5, 10] tells MATLAB to find the solution at t = 5 and at t = 10. You can solve equations backward in time by specifying t0 to be greater than tf. The parameter y0 is the value y(t0). The function file
must have two input arguments t and y even for equations where f(t, y) is not a function of t. You need not use array operations in the function file because the ODE solvers call the file with scalar values for the arguments. The basic syntax of ODE solvers is summarized in Table 8.5-2.
As a first example of using a solver, let us solve an equation whose solution is known in closed form so that we can make sure we are using the method correctly.
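The MATLAB session for this example is not reproduced here. As a rough analogue, SciPy's solve_ivp in Python follows the same calling pattern (a function of (t, y), a time span, and an initial condition); this sketch checks the solver against the known solution y(t) = 2e^(−10t) of dy/dt = −10y with y(0) = 2 (the equation choice is mine, matching the earlier examples):

```python
import math
from scipy.integrate import solve_ivp

def ydot(t, y):
    return [-10.0 * y[0]]         # dy/dt = -10*y

# Tight tolerances so the numerical answer should track the true solution
sol = solve_ivp(ydot, [0, 0.5], [2.0], rtol=1e-6, atol=1e-9)
y_end = sol.y[0, -1]
print(y_end, 2 * math.exp(-5))    # numerical vs. closed-form value at t = 0.5
```

If the two printed values disagree noticeably, the solver call (not the mathematics) is the first thing to recheck.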
Effect of Step Size
The spacing used by ode23 is smaller than that used by ode45 because ode45 has less truncation error than ode23 and thus can use a larger step size. When the solution changes rapidly, such as with an oscillatory solution, the circles representing the numerical solution do not always yield a smooth curve. Thus ode23 is sometimes more useful for plotting the solution because it often gives a smoother curve. For example, consider the problem

dy/dt = sin t        y(0) = 0
whose analytical solution is y(t) = 1 – cos t. To solve the differential equation numerically, define the following function file:
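The MATLAB function file is not shown here. As a rough Python analogue of the ode45/ode23 comparison, SciPy's solve_ivp offers RK45 and RK23 methods; this sketch (my own construction) counts the steps each takes on the same problem:

```python
import math
from scipy.integrate import solve_ivp

def ydot(t, y):
    return [math.sin(t)]          # right side of dy/dt = sin(t)

span, y0 = [0, 4 * math.pi], [0.0]
sol45 = solve_ivp(ydot, span, y0, method="RK45", rtol=1e-6, atol=1e-9)
sol23 = solve_ivp(ydot, span, y0, method="RK23", rtol=1e-6, atol=1e-9)

# The lower-order pair must take smaller (hence more) steps to meet
# the same error tolerance, which is why its plot looks smoother.
print(len(sol45.t), len(sol23.t))
```

The step counts make the figure's point quantitatively: the lower-order solver produces many more, more closely spaced points.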
Figure 8.5-7 shows the solution generated by ode45 (the top graph) and ode23 (the bottom graph). Note the difference in step sizes.
Numerical Methods and Linear Equations
Even though there are general methods available for finding the analytical solutions of linear differential equations, it is nevertheless sometimes more convenient to use a numerical method to find the solution. Examples of such situations are when the forcing function is a complicated function or when the order of the differential equation is higher than two. In such cases the algebra involved in obtaining the analytical solution might not be worth the effort, especially if the main objective is to obtain a plot of the solution. In such cases we can still use the characteristic roots to check the validity of the numerical results. Example 8.5-2 demonstrates this method for an equation with a complicated forcing function.
Use of Global Parameters
The function file to be used with the ode functions must have only the two arguments t and y. Therefore, any additional parameters needed to describe the differential equation cannot be passed through the function call. The global x y z command allows all functions and files using that command to share the values of the variables x, y, and z. Use of the global command avoids the necessity of changing the values of certain parameters in every file. Example 8.5-2 demonstrates the use of this command.
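The global command is MATLAB's way of getting parameters into the two-argument ODE function; in other languages the same need is usually met with a closure. A hypothetical Python sketch (the names and the quick Euler check are mine) that builds the function f(t, y) = r*y for a given r without any shared global state:

```python
import math

def make_f(r):
    """Return the right-side function f(t, y) = r*y with r fixed."""
    def f(t, y):
        return r * y
    return f

f = make_f(-10.0)          # the parameter r is captured in the closure
t, y, h = 0.0, 2.0, 0.001  # small Euler step just to exercise f
while t < 0.5 - 1e-12:
    y += h * f(t, y)
    t += h
print(y, 2 * math.exp(-5))
```

Changing the parameter then means calling make_f again rather than editing a value inside the function file.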
Solution of Nonlinear Equations
When the differential equation is nonlinear, we often have no analytical solution to use for checking our numerical results. In such cases we can use our physical insight to guard against grossly incorrect results. We can also check the equation for singularities that might affect the numerical procedure. Finally, we can sometimes use an approximation to replace the nonlinear equation with a linear one that can be solved analytically. Although the linear approximation does not give the exact answer, its solution can be used to see whether our numerical answer is “in the ballpark.” Example 8.5-3 illustrates this approach.
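As an illustration of the last idea, suppose we must solve dy/dt = −sin(y) numerically (this specific equation is my example, not the text's). For small y, sin(y) ≈ y, so the linearized equation dy/dt = −y, with solution y(0)e^(−t), tells us roughly what to expect:

```python
import math

def f(t, y):
    return -math.sin(y)          # nonlinear right side

# Classical fourth-order Runge-Kutta integration of the nonlinear equation
t, y, h = 0.0, 0.1, 0.01         # small initial value, so sin(y) ≈ y
while t < 1.0 - 1e-12:
    k1 = f(t, y)
    k2 = f(t + h / 2, y + (h / 2) * k1)
    k3 = f(t + h / 2, y + (h / 2) * k2)
    k4 = f(t + h, y + h * k3)
    y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

y_linear = 0.1 * math.exp(-1.0)  # solution of the linearized equation at t = 1
print(y, y_linear)               # close, but not identical
```

The numerical answer should be near the linear estimate; a result far from it would signal a programming error rather than nonlinear behavior at this small amplitude.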