Mathematical Considerations

ARKode solves ODE initial value problems (IVPs) in \(\mathbb{R}^N\). At present, such problems should be posed in linearly-implicit form, as

(1)\[M \dot{y} = f_E(t,y) + f_I(t,y), \qquad y(t_0) = y_0.\]

Here, \(t\) is the independent variable (e.g. time), and the dependent variables are given by \(y \in \mathbb{R}^N\), where we use the notation \(\dot{y}\) to denote \(\frac{dy}{dt}\).

\(M\) is a user-specified nonsingular operator from \(\mathbb{R}^N \to \mathbb{R}^N\). This operator is currently assumed to be independent of both \(t\) and \(y\). For standard systems of ordinary differential equations and for problems arising from the spatial semi-discretization of partial differential equations using finite difference, finite volume, or spectral finite element methods, \(M\) is typically the identity matrix, \(I\). For PDEs using other standard finite-element spatial semi-discretizations, \(M\) is typically a well-conditioned mass matrix that is fixed throughout a simulation (except in the case of a spatially-adaptive method, where \(M\) can change between, but not within, time steps).

The two right-hand side functions may be described as:

  • \(f_E(t,y)\) contains the “nonstiff” components of the system (this will be integrated using an explicit method);
  • \(f_I(t,y)\) contains the “stiff” components of the system (this will be integrated using an implicit method).

The time-stepping methods currently supplied with ARKode are designed to solve stiff, nonstiff and mixed stiff/nonstiff problems. Roughly speaking, stiffness is characterized by the presence of at least one rapidly damped mode, whose time constant is small compared to the time scale of the solution itself. In the implicit/explicit (ImEx) splitting above, these stiff components should be included in the right-hand side function \(f_I(t,y)\).

In the sub-sections that follow, we elaborate on the numerical methods that comprise ARKode. We first discuss the features of the top-level ARKode infrastructure, including its usage modes, interpolation module, and rootfinding capabilities. We then discuss the current suite of time-stepping modules supplied with ARKode, including the ARKStep module for additive Runge-Kutta methods and the ERKStep module that is optimized for explicit Runge-Kutta methods. We then discuss the adaptive temporal error controllers shared by the time-stepping modules, including discussion on our choice of norms used within ARKode for measuring errors within various components of the solver.

We conclude this section by discussing the nonlinear and linear solver strategies that the ARKStep time-stepper module uses in solving implicit systems that arise in computing each stage: nonlinear solvers, linear solvers, preconditioners, error control within iterative nonlinear and linear solvers, algorithms for initial predictors for implicit stage solutions, and approaches for handling non-identity mass-matrices.

Adaptive single-step methods

The top-level ARKode infrastructure is designed to support variable-step, single-step, IVP integration methods, i.e.

\[y_{n} = \varphi(y_{n-1}, h_n)\]

where \(y_{n-1}\) is an approximation to the solution \(y(t_{n-1})\), \(y_{n}\) is an approximation to the solution \(y(t_n)\) where \(t_n = t_{n-1} + h_n\), and the approximation method is represented by the function \(\varphi\).

The choice of step size \(h_n\) is determined by the time-stepping method (based on user-provided inputs, typically accuracy requirements). However, this value may be constrained by user-supplied bounds on the allowed step sizes, through defining the values \(h_\text{min}\) and \(h_\text{max}\) with the functions ARKodeSetMinStep() and ARKodeSetMaxStep() in C/C++, or through the inputs MIN_STEP and MAX_STEP to the function FARKSETRIN() in Fortran, respectively. These default to \(h_\text{min}=0\) and \(h_\text{max}=\infty\).
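For example, in C/C++ these bounds might be set immediately after creating the solver memory. This is a minimal sketch; arkode_mem is assumed to be a previously created ARKode memory block, and the two-argument signatures follow the usual SUNDIALS set-routine pattern:

    #include <arkode/arkode.h>   /* ARKodeSetMinStep, ARKodeSetMaxStep */

    /* Bound the adaptive step sizes: 1e-8 <= h_n <= 1.0. */
    int set_step_bounds(void *arkode_mem)
    {
      int flag = ARKodeSetMinStep(arkode_mem, 1.0e-8);  /* h_min */
      if (flag != 0) return flag;
      return ARKodeSetMaxStep(arkode_mem, 1.0);         /* h_max */
    }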

ARKode may be run in a variety of “modes”:

  • NORMAL – ARKode will take internal steps until it has just overtaken a user-specified output time, \(t_\text{out}\), in the direction of integration, i.e. \(t_{n-1} < t_\text{out} < t_{n}\) for forward integration, or \(t_{n} < t_\text{out} < t_{n-1}\) for backward integration. ARKode will then compute an approximation to the solution \(y(t_\text{out})\) by interpolation (using one of the dense output routines described in the following section Interpolation).
  • ONE-STEP – ARKode will only take a single internal step \(y_{n-1} \to y_{n}\) and then return control back to the calling program. If the step will overtake \(t_\text{out}\) then ARKode will again return an interpolated result; otherwise it will return a copy of the internal solution \(y_{n}\).
  • NORMAL-TSTOP – ARKode will take internal steps until the next step will overtake \(t_\text{out}\). ARKode will then limit this next step so that \(t_n = t_{n-1} + h_n = t_\text{out}\), and once the step completes it will return a copy of the internal solution \(y_{n}\).
  • ONE-STEP-TSTOP – ARKode will check whether the next step will overtake \(t_\text{out}\) – if not then this mode is identical to “one-step”; otherwise it will limit this next step so that \(t_n = t_{n-1} + h_n = t_\text{out}\). In either case, once the step completes it will return a copy of the internal solution \(y_{n}\).

We note that interpolated solutions may be slightly less accurate than the internal solutions produced by the solver. Hence, to ensure that the returned value has full method accuracy, one of the “tstop” modes should be used.
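The following sketch illustrates a typical NORMAL-mode output loop in C. It assumes the conventional SUNDIALS call pattern ARKode(arkode_mem, tout, y, &t, itask) and the task constant ARK_NORMAL; error checking is abbreviated:

    #include <arkode/arkode.h>           /* ARKode, ARK_NORMAL */
    #include <nvector/nvector_serial.h>  /* N_Vector */

    /* Advance the solution through nout evenly spaced output times in
       NORMAL mode; after each call, y holds the (interpolated) solution
       at tout, and t holds the internal time actually reached. */
    int output_loop(void *arkode_mem, N_Vector y, realtype t0,
                    realtype dtout, int nout)
    {
      realtype t = t0, tout = t0 + dtout;
      int iout, flag;
      for (iout = 0; iout < nout; iout++) {
        flag = ARKode(arkode_mem, tout, y, &t, ARK_NORMAL);
        if (flag < 0) return flag;       /* solver failure */
        /* ... use the solution y at time tout here ... */
        tout += dtout;
      }
      return 0;
    }

The ONE-STEP mode follows the same pattern with a different task constant, while the “tstop” variants additionally require a prior call to ARKodeSetStopTime().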

Interpolation

As mentioned above, the top-level ARKode infrastructure supports interpolation of solutions \(y(t_\text{out})\) where \(t_\text{out}\) occurs within a completed time step from \(t_{n-1} \to t_n\). Additionally, this module supports extrapolation of solutions to \(t\) outside this interval (e.g. to construct predictors for iterative nonlinear and linear solvers). To this end, ARKode currently supports construction of polynomial interpolants \(p_q(t)\) of order up to \(q=3\). The order \(q\) may be specified by the user via a call to the function ARKodeSetDenseOrder() in C/C++, or with the DENSE_ORDER argument to FARKSETIIN() in Fortran.

The interpolants generated are either of Lagrange or Hermite form, and use the data \(\left\{ y_{n-1}, f_{n-1}, y_{n}, f_{n} \right\}\), where here we use the simplified notation \(f_{k}\) to denote \(M^{-1} \left(f_E(t_k,y_k) + f_I(t_k,y_k)\right)\). Defining a normalized “time” variable, \(\tau\), for the most-recently-computed solution interval \(t_{n-1} \to t_{n}\) as

\[\tau(t) = \frac{t-t_{n}}{h_{n}},\]

so that \(\tau \in [-1,0]\) over this interval (with \(\tau(t_{n-1}) = -1\) and \(\tau(t_{n}) = 0\)), we then construct the interpolants \(p_q(t)\) as follows:

  • \(q=0\): constant interpolant

    \[p_0(\tau) = \frac{y_{n-1} + y_{n}}{2}.\]
  • \(q=1\): linear Lagrange interpolant

    \[p_1(\tau) = -\tau\, y_{n-1} + (1+\tau)\, y_{n}.\]
  • \(q=2\): quadratic Hermite interpolant

    \[p_2(\tau) = \tau^2\,y_{n-1} + (1-\tau^2)\,y_{n} + h(\tau+\tau^2)\,f_{n}.\]
  • \(q=3\): cubic Hermite interpolant

    \[p_3(\tau) = (3\tau^2 + 2\tau^3)\,y_{n-1} + (1-3\tau^2-2\tau^3)\,y_{n} + h(\tau^2+\tau^3)\,f_{n-1} + h(\tau+2\tau^2+\tau^3)\,f_{n}.\]

We note that although interpolants of order \(> 3\) are possible, these are not currently implemented due to their increased computational and storage costs. However, these may be added in future ARKode releases.
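To make these formulas concrete, the following scalar sketch evaluates the \(q=3\) interpolant for a single solution component (the library operates on vectors; the helper name p3_eval is hypothetical):

    /* Evaluate the cubic Hermite interpolant p_3 for one solution
       component at normalized time tau = (t - t_n)/h_n, so that
       tau = -1 recovers y_{n-1} and tau = 0 recovers y_n. */
    double p3_eval(double tau, double h,
                   double ym1, double yn, double fm1, double fn)
    {
      double t2 = tau*tau, t3 = t2*tau;
      return (3.0*t2 + 2.0*t3)*ym1 + (1.0 - 3.0*t2 - 2.0*t3)*yn
           + h*(t2 + t3)*fm1 + h*(tau + 2.0*t2 + t3)*fn;
    }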

Rootfinding

The top-level ARKode infrastructure also supports a rootfinding feature. This means that, while integrating the IVP (1), ARKode can also find the roots of a set of user-defined functions \(g_i(t,y)\) that depend on \(t\) and the solution vector \(y = y(t)\). The number of these root functions is arbitrary, and if more than one \(g_i\) is found to have a root in any given interval, the various root locations are found and reported in the order that they occur on the \(t\) axis, in the direction of integration.

Generally, this rootfinding feature finds only roots of odd multiplicity, corresponding to changes in sign of \(g_i(t, y(t))\), denoted \(g_i(t)\) for short. If a user root function has a root of even multiplicity (no sign change), it will almost certainly be missed by ARKode due to the realities of floating-point arithmetic. If such a root is desired, the user should reformulate the root function so that it changes sign at the desired root.

The basic scheme used is to check for sign changes of any \(g_i(t)\) over each time step taken, and then (when a sign change is found) to home in on the root (or roots) with a modified secant method [HS1980]. In addition, each time \(g\) is evaluated, ARKode checks to see if \(g_i(t) = 0\) exactly, and if so it reports this as a root. However, if an exact zero of any \(g_i\) is found at a point \(t\), ARKode computes \(g(t+\delta)\) for a small increment \(\delta\), slightly further in the direction of integration, and if any \(g_i(t+\delta) = 0\) as well, ARKode stops and reports an error. This way, each time ARKode takes a time step, it is guaranteed that the values of all \(g_i\) are nonzero at some past value of \(t\), beyond which a search for roots is to be done.

At any given time in the course of the time-stepping, after suitable checking and adjusting has been done, ARKode has an interval \((t_\text{lo}, t_\text{hi}]\) in which roots of the \(g_i(t)\) are to be sought, such that \(t_\text{hi}\) is further ahead in the direction of integration, and all \(g_i(t_\text{lo}) \ne 0\). The endpoint \(t_\text{hi}\) is either \(t_n\), the end of the time step last taken, or the next requested output time \(t_\text{out}\) if this comes sooner. The endpoint \(t_\text{lo}\) is either \(t_{n-1}\), or the last output time \(t_\text{out}\) (if this occurred within the last step), or the last root location (if a root was just located within this step), possibly adjusted slightly toward \(t_n\) if an exact zero was found. The algorithm checks \(g(t_\text{hi})\) for zeros, and it checks for sign changes in \((t_\text{lo}, t_\text{hi})\). If no sign changes are found, then either a root is reported (if some \(g_i(t_\text{hi}) = 0\)) or we proceed to the next time interval (starting at \(t_\text{hi}\)). If one or more sign changes were found, then a loop is entered to locate the root to within a rather tight tolerance, given by

\[\tau = 100\, U\, (|t_n| + |h|)\qquad (\text{where}\; U = \text{unit roundoff}).\]

Whenever sign changes are seen in two or more root functions, the one deemed most likely to have its root occur first is the one with the largest value of \(\left|g_i(t_\text{hi})\right| / \left| g_i(t_\text{hi}) - g_i(t_\text{lo})\right|\), corresponding to the closest to \(t_\text{lo}\) of the secant method values. At each pass through the loop, a new value \(t_\text{mid}\) is set, strictly within the search interval, and the values of \(g_i(t_\text{mid})\) are checked. Then either \(t_\text{lo}\) or \(t_\text{hi}\) is reset to \(t_\text{mid}\) according to which subinterval is found to have the sign change. If there is none in \((t_\text{lo}, t_\text{mid})\) but some \(g_i(t_\text{mid}) = 0\), then that root is reported. The loop continues until \(\left|t_\text{hi} - t_\text{lo} \right| < \tau\), and then the reported root location is \(t_\text{hi}\). In the loop to locate the root of \(g_i(t)\), the formula for \(t_\text{mid}\) is

\[t_\text{mid} = t_\text{hi} - \frac{g_i(t_\text{hi}) (t_\text{hi} - t_\text{lo})}{g_i(t_\text{hi}) - \alpha g_i(t_\text{lo})} ,\]

where \(\alpha\) is a weight parameter. On the first two passes through the loop, \(\alpha\) is set to 1, making \(t_\text{mid}\) the secant method value. Thereafter, \(\alpha\) is reset according to the side of the subinterval (low vs high, i.e. toward \(t_\text{lo}\) vs toward \(t_\text{hi}\)) in which the sign change was found in the previous two passes. If the two sides were opposite, \(\alpha\) is set to 1. If the two sides were the same, \(\alpha\) is halved (if on the low side) or doubled (if on the high side). The value of \(t_\text{mid}\) is closer to \(t_\text{lo}\) when \(\alpha < 1\) and closer to \(t_\text{hi}\) when \(\alpha > 1\). If the above value of \(t_\text{mid}\) is within \(\tau /2\) of \(t_\text{lo}\) or \(t_\text{hi}\), it is adjusted inward, such that its fractional distance from the endpoint (relative to the interval size) is between 0.1 and 0.5 (with 0.5 being the midpoint), and the actual distance from the endpoint is at least \(\tau/2\).
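A simplified sketch of this safeguarded update, assuming forward integration (\(t_\text{lo} < t_\text{hi}\)) and an illustrative helper name rather than the library's internal interface, is:

    #include <math.h>  /* fmax */

    /* One safeguarded secant update: alpha = 1 gives the plain secant
       value; the result is then pushed away from the endpoints so that
       it lies strictly inside (t_lo, t_hi). */
    double root_tmid(double tlo, double thi, double glo, double ghi,
                     double alpha, double tau)
    {
      double tmid = thi - ghi*(thi - tlo)/(ghi - alpha*glo);
      double offset = fmax(0.5*tau, 0.1*(thi - tlo));  /* minimum endpoint gap */
      if (tmid - tlo < 0.5*tau) tmid = tlo + offset;
      if (thi - tmid < 0.5*tau) tmid = thi - offset;
      return tmid;
    }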

Finally, we note that when running in parallel, the ARKode rootfinding module assumes that the entire set of root defining functions \(g_i(t,y)\) is replicated on every MPI task. Since in these cases the vector \(y\) is distributed across tasks, it is the user’s responsibility to perform any necessary inter-task communication to ensure that \(g_i(t,y)\) is identical on each task.

Additive Runge-Kutta methods

The ARKStep time-stepping module in ARKode utilizes variable-step, embedded, additive Runge-Kutta methods (ARK), corresponding to algorithms of the form

(2)\[\begin{split}M z_i &= M y_{n-1} + h_n \sum_{j=1}^{i-1} A^E_{i,j} f_E(t^E_{n,j}, z_j) + h_n \sum_{j=1}^{i} A^I_{i,j} f_I(t^I_{n,j}, z_j), \quad i=1,\ldots,s, \\ M y_n &= M y_{n-1} + h_n \sum_{i=1}^{s} \left(b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right), \\ M \tilde{y}_n &= M y_{n-1} + h_n \sum_{i=1}^{s} \left( \tilde{b}^E_i f_E(t^E_{n,i}, z_i) + \tilde{b}^I_i f_I(t^I_{n,i}, z_i)\right).\end{split}\]

Here \(y_n\) correspond to computed approximations of \(y(t_n)\), \(\tilde{y}_n\) are embedded solutions (used in error estimation; typically lower-order-accurate), and \(h_n \equiv t_n - t_{n-1}\) is the step size. The internal stage times are abbreviated using the notation \(t^E_{n,j} = t_{n-1} + c^E_j h_n\) and \(t^I_{n,j} = t_{n-1} + c^I_j h_n\). The ARK method is primarily defined through the coefficients \(A^E \in \mathbb{R}^{s\times s}\), \(A^I \in \mathbb{R}^{s\times s}\), \(b^E \in \mathbb{R}^{s}\), \(b^I \in \mathbb{R}^{s}\), \(c^E \in \mathbb{R}^{s}\) and \(c^I \in \mathbb{R}^{s}\), that correspond with the explicit and implicit Butcher tables. Additional coefficients \(\tilde{b}^E \in \mathbb{R}^{s}\) and \(\tilde{b}^I \in \mathbb{R}^{s}\) are used to construct the embedded solution that is used to estimate error for adaptive time-stepping. We note that ARKStep currently enforces the constraint that the explicit and implicit methods in an ARK pair must share the same number of stages, \(s\).

The user of ARKStep must choose appropriately between one of three classes of methods: ImEx, explicit and implicit. All of ARKode’s available Butcher tables encoding the coefficients \(c^E\), \(c^I\), \(A^E\), \(A^I\), \(b^E\), \(b^I\), \(\tilde{b}^E\) and \(\tilde{b}^I\) are further described in the Appendix: Butcher tables.

For mixed stiff/nonstiff problems, a user should provide both of the functions \(f_E\) and \(f_I\) that define the IVP system. For such problems, ARKStep currently implements the ARK methods proposed in [KC2003], allowing for methods having order of accuracy \(q = \{3,4,5\}\). The tables for these methods are given in the section Additive Butcher tables.

For nonstiff problems, a user may specify that \(f_I = 0\), i.e. the equation (1) reduces to the non-split IVP

(3)\[M\, \dot{y} = f_E(t,y), \qquad y(t_0) = y_0.\]

In this scenario, the coefficients \(A^I=0\), \(c^I=0\), \(b^I=0\) and \(\tilde{b}^I=0\) in (2), and the ARK methods reduce to classical explicit Runge-Kutta methods (ERK). For these classes of methods, ARKode allows orders of accuracy \(q = \{2,3,4,5,6,8\}\), with embeddings of orders \(p = \{1,2,3,4,5,7\}\). These default to the Heun-Euler-2-1-2, Bogacki-Shampine-4-2-3, Zonneveld-5-3-4, Cash-Karp-6-4-5, Verner-8-5-6 and Fehlberg-13-7-8 methods, respectively.

Finally, for stiff problems the user may specify that \(f_E = 0\), so the equation (1) reduces to the non-split IVP

(4)\[M\, \dot{y} = f_I(t,y), \qquad y(t_0) = y_0.\]

Similarly to ERK methods, in this scenario the coefficients \(A^E=0\), \(c^E=0\), \(b^E=0\) and \(\tilde{b}^E=0\) in (2), and the ARK methods reduce to classical diagonally-implicit Runge-Kutta methods (DIRK). For these classes of methods, ARKode allows orders of accuracy \(q = \{2,3,4,5\}\), with embeddings of orders \(p = \{1,2,3,4\}\). These default to the SDIRK-2-1-2, ARK-4-2-3 (implicit), SDIRK-5-3-4 and ARK-8-4-5 (implicit) methods, respectively.

Explicit Runge-Kutta methods

The ERKStep time-stepping module in ARKode can only be applied to IVP problems of the form

(5)\[\dot{y} = f(t,y), \qquad y(t_0) = y_0.\]

For such problems, ERKStep provides variable-step, embedded, explicit Runge-Kutta methods (ERK), corresponding to algorithms of the form

(6)\[\begin{split}z_i &= y_{n-1} + h_n \sum_{j=1}^{i-1} A_{i,j} f(t_{n,j}, z_j), \quad i=1,\ldots,s, \\ y_n &= y_{n-1} + h_n \sum_{i=1}^{s} b_i f(t_{n,i}, z_i), \\ \tilde{y}_n &= y_{n-1} + h_n \sum_{i=1}^{s} \tilde{b}_i f(t_{n,i}, z_i),\end{split}\]

where the variables have the same meanings as in the previous section. We note that the problem (5) is fully encapsulated in the more general problem (3), and that the algorithm (6) is similarly encapsulated in the more general algorithm (2). While it therefore follows that ARKStep can be used to solve every problem solvable by ERKStep, using the same set of methods, we include ERKStep as a distinct time-stepping module since this simplified form admits a solution process that requires significantly less storage and fewer right-hand side function evaluations than the more general form.

ARKode error norm

In the process of controlling errors at various levels (time integration, nonlinear solution, linear solution), ARKode uses a weighted root-mean-square norm, denoted \(\|\cdot\|_\text{WRMS}\), for all error-like quantities,

(7)\[\|v\|_\text{WRMS} = \left( \frac{1}{N} \sum_{i=1}^N \left(v_i\, w_i\right)^2\right)^{1/2}.\]

The power of this choice of norm arises in the specification of the weighting vector \(w\), that combines the units of the problem with user-supplied values that specify an “acceptable” level of error. To this end, ARKode constructs an error weight vector using the most-recent step solution and the user-supplied relative and absolute tolerances, namely

(8)\[w_i = \frac{1}{RTOL\cdot |y_{n-1,i}| + ATOL_i}.\]

Since \(1/w_i\) represents a tolerance in the ith component of the solution vector \(y\), a vector whose WRMS norm is 1 is regarded as “small.” For brevity, we will typically drop the subscript WRMS on norms in the remainder of this section.
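As a plain-C illustration (not the SUNDIALS N_Vector implementation), the norm (7) with the solution-based weights (8) may be computed as:

    #include <math.h>  /* fabs, sqrt */

    /* WRMS norm of v, with weights w_i = 1/(rtol*|y_i| + atol_i).
       All arrays have length N. */
    double wrms_norm(const double *v, const double *y,
                     double rtol, const double *atol, int N)
    {
      double sum = 0.0;
      for (int i = 0; i < N; i++) {
        double vw = v[i] / (rtol*fabs(y[i]) + atol[i]);
        sum += vw*vw;
      }
      return sqrt(sum/N);
    }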

Additionally, for problems involving a non-identity mass matrix, \(M\ne I\), the units of equation (1) may differ from the units of the solution \(y\). In this case, ARKode may also construct a residual weight vector,

(9)\[w_i = \frac{1}{RTOL\cdot | \left[M y_{n-1}\right]_i| + ATOL'_i},\]

where the user may specify a separate absolute residual tolerance value or array, \(ATOL'\). The choice of weighting vector used in any given norm is determined by the quantity being measured: values having solution units use (8), whereas values having equation units use (9). Obviously, for problems with \(M=I\), the weighting vectors are identical.

Time step adaptivity

A critical component of both the ARKStep and ERKStep time-stepping modules, making them IVP “solvers” rather than just time-steppers, is their adaptive control of local truncation error. At every step, we estimate the local error, and ensure that it satisfies tolerance conditions. If this local error test fails, then the step is recomputed with a reduced step size. To this end, every Runge-Kutta method packaged within ARKode admits an embedded solution \(\tilde{y}_n\), as shown in equations (2) and (6). Generally, these embedded solutions attain a slightly lower order of accuracy than the computed solution \(y_n\). Denoting the order of accuracy for \(y_n\) as \(q\) and for \(\tilde{y}_n\) as \(p\), most embedded methods satisfy \(p = q-1\). These values of \(q\) and \(p\) correspond to the global orders of accuracy for the method and embedding, respectively, hence each admits local truncation errors satisfying [HW1993]

(10)\[\begin{split}\| y_n - y(t_n) \| = C h_n^{q+1} + \mathcal O(h_n^{q+2}), \\ \| \tilde{y}_n - y(t_n) \| = D h_n^{p+1} + \mathcal O(h_n^{p+2}),\end{split}\]

where \(C\) and \(D\) are constants independent of \(h_n\), and where we have assumed exact initial conditions for the step, i.e. \(y_{n-1} = y(t_{n-1})\). Combining these estimates, we have

\[\| y_n - \tilde{y}_n \| = \| y_n - y(t_n) - \tilde{y}_n + y(t_n) \| \le \| y_n - y(t_n) \| + \| \tilde{y}_n - y(t_n) \| \le D h_n^{p+1} + \mathcal O(h_n^{p+2}).\]

We therefore use the norm of the difference between \(y_n\) and \(\tilde{y}_n\) as an estimate for the local truncation error at step \(n\),

(11)\[T_n = \beta \left(y_n - \tilde{y}_n\right) = \beta h_n M^{-1} \sum_{i=1}^{s} \left[ \left(b^E_i - \tilde{b}^E_i\right) f_E(t^E_{n,i}, z_i) + \left(b^I_i - \tilde{b}^I_i\right) f_I(t^I_{n,i}, z_i) \right]\]

for ARK methods, and similarly for ERK methods. Here, \(\beta>0\) is an error bias to help account for the error constant \(D\); the default value of this constant is \(\beta = 1.5\), which may be modified by the user through the functions ARKStepSetErrorBias() and ERKStepSetErrorBias() in C/C++, or through the input ADAPT_BIAS to FARKSETRIN() in Fortran.

With this LTE estimate, the local error test is simply \(\|T_n\| < 1\) since this norm includes the user-specified tolerances. If this error test passes, the step is considered successful, and the estimate is subsequently used to estimate the next step size, as will be described below in the section Asymptotic error control. If the error test fails, the step is rejected and a new step size \(h'\) is then computed using the error control algorithms described in Asymptotic error control. A new attempt at the step is made, and the error test is repeated. If it fails multiple times (as specified through the small_nef input to ARKStepSetSmallNumEFails() and ERKStepSetSmallNumEFails() in C/C++, or the ADAPT_SMALL_NEF argument to FARKSETIIN() in Fortran, which defaults to 2), then \(h'/h\) is limited above to 0.3 (this is modifiable via the etamxf argument to ARKStepSetMaxEFailGrowth() and ERKStepSetMaxEFailGrowth() in C/C++, or the ADAPT_ETAMXF argument to FARKSETRIN() in Fortran), and limited below to 0.1 after an additional step failure. After seven error test failures (modifiable via the functions ARKStepSetMaxErrTestFails() and ERKStepSetMaxErrTestFails() in C/C++, or the MAX_ERRFAIL argument to FARKSETIIN() in Fortran), ARKode returns to the user with a failure message.

We define the step size ratio between a prospective step \(h'\) and a completed step \(h\) as \(\eta\), i.e.

\[\eta = h' / h.\]

This is bounded above by \(\eta_\text{max}\) to ensure that step size adjustments are not overly aggressive. This value is modified according to the step and history,

\[\begin{split}\eta_\text{max} = \begin{cases} \text{etamx1}, & \quad\text{on the first step (default is 10000)}, \\ \text{growth}, & \quad\text{on general steps (default is 20)}, \\ 1, & \quad\text{if the previous step had an error test failure}. \end{cases}\end{split}\]

Here, the value of etamx1 may be modified by the user in the functions ARKStepSetMaxFirstGrowth() and ERKStepSetMaxFirstGrowth() in C/C++, or through the input ADAPT_ETAMX1 to the function FARKSETRIN() in Fortran. Similarly, the growth value may be modified by calls to ARKStepSetMaxGrowth() and ERKStepSetMaxGrowth() in C/C++, or through the input ADAPT_GROWTH to FARKSETRIN() in Fortran.

A flowchart detailing how the time steps are modified at each iteration to ensure solver convergence and successful steps is given in the figure below. Here, all norms correspond to the WRMS norm, and the error adaptivity function arkAdapt is supplied by one of the error control algorithms discussed in the subsections below.

[Figure: flowchart of the time step adaptivity procedure]

For some problems it may be preferable to avoid small step size adjustments. This can be especially true for problems that construct a Newton Jacobian matrix or a preconditioner for a nonlinear or an iterative linear solve, where this construction is computationally expensive, and where convergence can be seriously hindered through use of an inaccurate matrix. In these scenarios, the step is not changed when \(\eta \in [\eta_L, \eta_U]\). The default values for this interval are \(\eta_L = 1\) and \(\eta_U = 1.5\), though these are modifiable through the functions ARKStepSetFixedStepBounds() and ERKStepSetFixedStepBounds() in C/C++, or through the input ADAPT_BOUNDS to the function FARKSETRIN() in Fortran.

We note that any choices for \(\eta\) (or equivalently, \(h'\)) are subsequently constrained by the bounds \(h_\text{min}\) and \(h_\text{max}\) supplied to the main ARKode infrastructure (via the functions ARKodeSetMinStep() and ARKodeSetMaxStep() in C/C++, or through the inputs MIN_STEP and MAX_STEP to the function FARKSETRIN() in Fortran, respectively). Additionally, the top-level ARKode infrastructure may similarly limit \(h'\) to adhere to a user-provided “TSTOP” stopping point, \(t_\text{stop}\) (supplied by a call to ARKodeSetStopTime() in C/C++, or through the input STOP_TIME to FARKSETRIN() in Fortran).

Asymptotic error control

As mentioned above, the time-stepping modules in ARKode adapt the step size in order to attain local errors within desired tolerances of the true solution. These adaptivity algorithms estimate the prospective step size \(h'\) based on the asymptotic local error estimates (10). We define the values \(\varepsilon_n\), \(\varepsilon_{n-1}\) and \(\varepsilon_{n-2}\) as

\[\begin{split}\varepsilon_k &\ \equiv \ \|T_k\| \ = \ \beta \|y_k - \tilde{y}_k\|,\end{split}\]

corresponding to the local error estimates for three consecutive steps, \(t_{n-3} \to t_{n-2} \to t_{n-1} \to t_n\). These local error history values are all initialized to 1 upon program initialization, to accommodate the few initial time steps of a calculation where some of these error estimates have not yet been computed. With these estimates, ARKode implements a variety of error control algorithms, as specified in the subsections below.

PID controller

This is the default time adaptivity controller used by ARKode. It derives from those found in [KC2003], [S1998], [S2003] and [S2006], and uses all three of the local error estimates \(\varepsilon_n\), \(\varepsilon_{n-1}\) and \(\varepsilon_{n-2}\) in determination of a prospective step size,

\[h' \;=\; h_n\; \varepsilon_n^{-k_1/p}\; \varepsilon_{n-1}^{k_2/p}\; \varepsilon_{n-2}^{-k_3/p},\]

where the constants \(k_1\), \(k_2\) and \(k_3\) default to 0.58, 0.21 and 0.1, respectively. These parameters may be changed via a call to ARKStepSetAdaptivityMethod() or ERKStepSetAdaptivityMethod() in C/C++, or to FARKSETADAPTIVITYMETHOD() in Fortran. In this estimate, a floor of \(\varepsilon > 10^{-10}\) is enforced to avoid division-by-zero errors.
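A scalar sketch of this controller with the default gains and the \(10^{-10}\) floor (the helper name pid_step is illustrative only) is:

    #include <math.h>  /* pow, fmax */

    /* PID suggestion: h' = h * e_n^{-k1/p} * e_{n-1}^{k2/p} * e_{n-2}^{-k3/p},
       where p is the embedding order and each error estimate is floored
       at 1e-10 before use. */
    double pid_step(double h, double en, double enm1, double enm2, int p)
    {
      const double k1 = 0.58, k2 = 0.21, k3 = 0.1;  /* default gains */
      en   = fmax(en,   1.0e-10);
      enm1 = fmax(enm1, 1.0e-10);
      enm2 = fmax(enm2, 1.0e-10);
      return h * pow(en, -k1/p) * pow(enm1, k2/p) * pow(enm2, -k3/p);
    }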

PI controller

Like the previous method, the PI controller derives from those found in [KC2003], [S1998], [S2003] and [S2006], but it differs in that it uses only the two most recent local error estimates in its adaptivity algorithm,

\[h' \;=\; h_n\; \varepsilon_n^{-k_1/p}\; \varepsilon_{n-1}^{k_2/p}.\]

Here, \(k_1\) and \(k_2\) default to 0.8 and 0.31, respectively, though they may be changed via a call to either ARKStepSetAdaptivityMethod() or ERKStepSetAdaptivityMethod() in C/C++, or FARKSETADAPTIVITYMETHOD() in Fortran. As with the previous controller, a floor of \(10^{-10}\) is enforced on the local error estimates.

I controller

This is the standard time adaptivity control algorithm in use by most available ODE solver codes. It bases the prospective time step estimate entirely on the current local error estimate,

\[h' \;=\; h_n\; \varepsilon_n^{-k_1/p}.\]

By default, \(k_1=1\), but that may be overridden by the user with the functions ARKStepSetAdaptivityMethod() and ERKStepSetAdaptivityMethod() in C/C++, or the function FARKSETADAPTIVITYMETHOD() in Fortran.

Explicit Gustafsson controller

This step adaptivity algorithm was proposed in [G1991], and is primarily useful in combination with explicit Runge-Kutta methods. Using the notation of our earlier controllers, it has the form

(12)\[\begin{split}h' \;=\; \begin{cases} h_1\; \varepsilon_1^{-1/p}, &\quad\text{on the first step}, \\ h_n\; \varepsilon_n^{-k_1/p}\; \left(\varepsilon_n/\varepsilon_{n-1}\right)^{k_2/p}, & \quad\text{on subsequent steps}. \end{cases}\end{split}\]

The default values of \(k_1\) and \(k_2\) are 0.367 and 0.268, respectively, which may be changed by calling either ARKStepSetAdaptivityMethod() or ERKStepSetAdaptivityMethod() in C/C++, or FARKSETADAPTIVITYMETHOD() in Fortran.

Implicit Gustafsson controller

A version of the above controller suitable for implicit Runge-Kutta methods was introduced in [G1994], and has the form

(13)\[\begin{split}h' = \begin{cases} h_1 \varepsilon_1^{-1/p}, &\quad\text{on the first step}, \\ h_n \left(h_n / h_{n-1}\right) \varepsilon_n^{-k_1/p} \left(\varepsilon_n/\varepsilon_{n-1}\right)^{-k_2/p}, & \quad\text{on subsequent steps}. \end{cases}\end{split}\]

The algorithm parameters default to \(k_1 = 0.98\) and \(k_2 = 0.95\), but may be modified by the user with ARKStepSetAdaptivityMethod() or ERKStepSetAdaptivityMethod() in C/C++, or FARKSETADAPTIVITYMETHOD() in Fortran.

ImEx Gustafsson controller

An ImEx version of the two preceding controllers is also available. This approach computes the estimate \(h'_1\) arising from equation (12) and the estimate \(h'_2\) arising from equation (13), and selects

\[h' = \frac{h}{|h|}\min\left\{|h'_1|, |h'_2|\right\}.\]

Here, equation (12) uses \(k_1\) and \(k_2\) with default values of 0.367 and 0.268, while equation (13) sets both parameters to the input \(k_3\) that defaults to 0.95. All three of these parameters may be modified with ARKStepSetAdaptivityMethod() and ERKStepSetAdaptivityMethod() in C/C++, or FARKSETADAPTIVITYMETHOD() in Fortran.

User-supplied controller

Finally, ARKode allows the user to define their own time step adaptivity function,

\[h' = H(y, t, h_n, h_{n-1}, h_{n-2}, \varepsilon_n, \varepsilon_{n-1}, \varepsilon_{n-2}, q, p),\]

via a call to the C/C++ routines ARKStepSetAdaptivityFn() and ERKStepSetAdaptivityFn(), or the Fortran routine FARKADAPTSET().

Explicit stability

For problems that involve a nonzero explicit component, i.e. \(f_E(t,y) \ne 0\) in ARKStep or any problem in ERKStep, explicit and ImEx Runge-Kutta methods may benefit from additional user-supplied information regarding the explicit stability region. All ARKode adaptivity methods utilize estimates of the local error. It is often the case that such local error control will be sufficient for method stability, since unstable steps will typically exceed the error control tolerances. However, for problems in which \(f_E(t,y)\) includes even moderately stiff components, and especially for higher-order integration methods, it may occur that a significant number of attempted steps will exceed the error tolerances. While these steps will automatically be recomputed, such trial-and-error may be costlier than desired. In these scenarios, a stability-based time step controller may also be useful.

Since the explicit stability region for any method depends on the problem under consideration, in that the extents of the stability region result from the eigenvalues of the linearized operator \(\frac{\partial f_E}{\partial y}\), information on the maximum stable step size is not computed internally within ARKode. However, for many problems such information is readily available. For example, in an advection-diffusion calculation, \(f_I\) may contain the stiff diffusive components and \(f_E\) may contain the comparably nonstiff advection terms. In this scenario, an explicitly stable step \(h_\text{exp}\) would be predicted as one satisfying the Courant-Friedrichs-Lewy (CFL) stability condition,

\[\begin{split}|h_\text{exp}| < \frac{\Delta x}{|\lambda|}\end{split}\]

where \(\Delta x\) is the spatial mesh size and \(\lambda\) is the fastest advective wave speed.

In these scenarios, a user may supply a routine to predict this maximum explicitly stable step size, \(|h_\text{exp}|\), by calling the C/C++ functions ARKStepSetStabilityFn() or ERKStepSetStabilityFn(), or the Fortran function FARKEXPSTABSET(). If a value for \(|h_\text{exp}|\) is supplied, it is compared against the value resulting from the local error controller, \(|h_\text{acc}|\), and the eventual time step used will satisfy

\[h' = \frac{h}{|h|}\min\{c\, |h_\text{exp}|,\, |h_\text{acc}|\}.\]

Here the explicit stability step factor (often called the “CFL factor”) \(c>0\) may be modified through the functions ARKStepSetCFLFraction() and ERKStepSetCFLFraction() in C/C++, or through the input ADAPT_CFL to the function FARKSETRIN() in Fortran, and has a default value of \(1/2\).
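As an illustration, a stability routine for the advection example above might look as follows. This is a sketch assuming the ARKExpStabFn callback signature (N_Vector y, realtype t, realtype* hstab, void* user_data); the AdvectionData structure is hypothetical:

    #include <math.h>            /* fabs */
    #include <arkode/arkode.h>   /* realtype, N_Vector */

    /* Hypothetical user data holding the mesh size and wave speed. */
    typedef struct { realtype dx, lambda; } AdvectionData;

    /* Report the CFL-limited maximum explicit step |h_exp| = dx/|lambda|. */
    static int ExpStab(N_Vector y, realtype t, realtype *hstab, void *user_data)
    {
      AdvectionData *d = (AdvectionData*) user_data;
      *hstab = d->dx / fabs(d->lambda);
      return 0;  /* success */
    }

Such a routine would then be attached via a call like ARKStepSetStabilityFn(arkode_mem, ExpStab, &data), after which the adaptivity logic combines \(|h_\text{exp}|\) with \(|h_\text{acc}|\) as described above.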

Fixed time stepping

While both the ARKStep and ERKStep time-stepping modules in ARKode are designed for time step adaptivity, they additionally support a “fixed-step” mode. This mode is typically used for debugging purposes, for verification against hand-coded Runge-Kutta methods, or for problems where the time steps should be chosen based on other problem-specific information. In this mode, all internal time step adaptivity is disabled:

  • temporal error control is disabled,
  • nonlinear or linear solver non-convergence will result in an error (instead of a step size adjustment),
  • no check against an explicit stability condition is performed.

Additional information on this mode is provided in the section Optional input functions.

Algebraic solvers

Since the ERKStep time-stepping module provides purely explicit numerical methods, the remainder of this section currently pertains only to the ARKStep module. More specifically, when using the ARKStep time-stepping module for a problem involving either a nonzero implicit component, \(f_I(t,y) \ne 0\), or a non-identity mass matrix, \(M \ne I\), systems of linear or nonlinear algebraic equations must be solved at each stage and/or step of the method. This section therefore focuses on the variety of mathematical methods provided in ARKode for such problems, including nonlinear solvers, linear solvers, preconditioners, iterative solver error control, implicit predictors, and techniques used for simplifying the above solves when using non-time-dependent mass-matrices.

Nonlinear solver methods

For both the ARK and DIRK methods, corresponding to (1) and (4) respectively, an implicit system

(14)\[G(z_i) \equiv M z_i - h_n A^I_{i,i} f_I(t^I_{n,i}, z_i) - a_i = 0\]

must be solved for each stage \(z_i, i=1,\ldots,s\), where we have the data

\[a_i \equiv \left( y_{n-1} + h_n \sum_{j=1}^{i-1} \left[ A^E_{i,j} f_E(t^E_{n,j}, z_j) + A^I_{i,j} f_I(t^I_{n,j}, z_j) \right] \right)\]

for the ARK methods, or

\[a_i \equiv \left( y_{n-1} + h_n \sum_{j=1}^{i-1} A^I_{i,j} f_I(t^I_{n,j}, z_j) \right)\]

for the DIRK methods. Here, if \(f_I(t,y)\) depends nonlinearly on \(y\) then (14) corresponds to a nonlinear system of equations; if \(f_I(t,y)\) depends linearly on \(y\) then this is a linear system of equations.

For systems of either type, ARKStep allows a choice of solution strategy. The default solver choice is a variant of Newton’s method,

(15)\[z_i^{(m+1)} = z_i^{(m)} + \delta^{(m+1)},\]

where \(m\) is the Newton iteration index, and the Newton update \(\delta^{(m+1)}\) in turn requires the solution of the Newton linear system

(16)\[{\mathcal A}\left(t^I_{n,i}, z_i^{(m)}\right)\, \delta^{(m+1)} = -G\left(z_i^{(m)}\right),\]

in which

(17)\[{\mathcal A}(t,z) \approx M - \gamma J(t,z), \quad J(t,z) = \frac{\partial f_I(t,z)}{\partial z}, \quad\text{and}\quad \gamma = h_n A^I_{i,i}.\]

When the problem involves an identity mass matrix, then as an alternate to Newton’s method, ARKStep may instead solve for each stage \(z_i, i=1,\ldots,s\) using a fixed point iteration

(18)\[z_i^{(m+1)} = \Phi\left(z_i^{(m)}\right) \equiv z_i^{(m)} - G\left(z_i^{(m)}\right), \quad m=0,1,\ldots\]

This iteration may additionally be improved using a technique called “Anderson acceleration” [WN2011]. Unlike Newton’s method, these methods do not require the solution of a linear system at each iteration, instead requiring only the solution of a low-dimensional least-squares problem to construct the nonlinear update.
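For a scalar ODE with \(M=I\), a minimal (non-accelerated) sketch of this iteration is shown below; the helper name and simple convergence test are illustrative, with ARKode's actual stopping test described in the later section Nonlinear iteration error control:

    #include <math.h>  /* fabs */

    /* Fixed-point solve of G(z) = z - h*Aii*fI(t,z) - a = 0 for one
       implicit stage, iterating z <- z - G(z) = h*Aii*fI(t,z) + a. */
    double fixed_point_stage(double (*fI)(double t, double z),
                             double t, double h, double Aii, double a,
                             double z0, double tol, int maxiter)
    {
      double z = z0;
      for (int m = 0; m < maxiter; m++) {
        double znew = h*Aii*fI(t, z) + a;
        if (fabs(znew - z) < tol) return znew;
        z = znew;
      }
      return z;  /* caller should check for non-convergence */
    }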

Finally, if the user specifies that \(f_I(t,y)\) depends linearly on \(y\) (via a call to ARKStepSetLinear() in C/C++, or the LINEAR argument to FARKSETIIN() in Fortran), and if the Newton-based nonlinear solver is chosen, then the problem (14) will be solved using only a single Newton iteration. In this case, an additional argument to either ARKStepSetLinear() or FARKSETIIN() must be supplied to indicate whether this Jacobian is time-dependent or not, indicating whether the Jacobian or preconditioner needs to be recomputed at each stage or time step.

The optimal choice of solver (Newton vs fixed-point) is highly problem-dependent. Since fixed-point solvers do not require the solution of any linear systems, each iteration may be significantly less costly than its Newton counterpart. However, this can come at the cost of slower convergence (or even divergence) in comparison with Newton-like methods. On the other hand, these fixed-point solvers do allow for user specification of the Anderson-accelerated subspace size, \(m_k\). While the required amount of solver memory for acceleration grows proportionately to \(m_k N\), larger values of \(m_k\) may result in faster convergence. In our experience, this improvement is most significant for “small” values, e.g. \(1\le m_k\le 5\); larger values of \(m_k\) may not result in improved convergence.

While ARKStep uses a Newton-based iteration as its default solver due to its increased robustness on very stiff problems, it is strongly recommended that users also consider the fixed-point solver when attempting a new problem.

For either the Newton or fixed-point solvers, it is well-known that both the efficiency and robustness of the algorithm intimately depend on the choice of a good initial guess. In ARKStep, the initial guess for these solvers is a prediction \(z_i^{(0)}\) that is computed explicitly from previously-computed data (e.g. \(y_{n-2}\), \(y_{n-1}\), and \(z_j\) where \(j<i\)). Additional information on the specific ARKStep predictor algorithms is provided in the following section, Implicit predictors.

Linear solver methods

When a Newton-based method is chosen for solving each nonlinear system, a linear system of equations must be solved at each nonlinear iteration. For this solve ARKode provides several choices, including the option of a user-supplied linear solver module. The linear solver modules distributed with SUNDIALS are organized into two families: a direct family comprising direct linear solvers for dense, banded or sparse matrices, and a spils family comprising scaled, preconditioned, iterative (Krylov) linear solvers. The methods offered through these modules are as follows:

  • dense direct solvers, using either an internal SUNDIALS implementation or a BLAS/LAPACK implementation (serial version only),
  • band direct solvers, using either an internal SUNDIALS implementation or a BLAS/LAPACK implementation (serial version only),
  • sparse direct solvers, using either the KLU sparse matrix library [KLU], or the OpenMP or PThreads-enabled SuperLU_MT sparse matrix library [SuperLUMT] [Note that users will need to download and install the KLU or SuperLU_MT packages independent of ARKode],
  • SPGMR, a scaled, preconditioned GMRES (Generalized Minimal Residual) solver,
  • SPFGMR, a scaled, preconditioned Flexible GMRES (Generalized Minimal Residual) solver,
  • SPBCGS, a scaled, preconditioned Bi-CGStab (Bi-Conjugate Gradient Stable) solver,
  • SPTFQMR, a scaled, preconditioned TFQMR (Transpose-free Quasi-Minimal Residual) solver, or
  • PCG, a preconditioned CG (Conjugate Gradient method) solver for symmetric linear systems.

For large stiff systems where direct methods are infeasible, the combination of an implicit integrator and a preconditioned Krylov method can yield a powerful tool because it combines established methods for stiff integration, nonlinear solver iteration, and Krylov (linear) iteration with a problem-specific treatment of the dominant sources of stiffness, in the form of a user-supplied preconditioner matrix [BH1989]. We note that the direct linear solver interfaces provided by SUNDIALS are only designed to be used with the serial and threaded vector representations.

Direct linear solvers

In the case that a direct linear solver is used, ARKStep utilizes either a Newton or a modified Newton iteration. The difference between these is that in a modified Newton method, the matrix \({\mathcal A}\) is held fixed for multiple Newton iterations. More precisely, each Newton iteration is computed from the modified equation

(19)\[\tilde{\mathcal A}\left(\tilde{t},\tilde{z}\right)\, \delta^{(m+1)} = -G\left(z_i^{(m)}\right),\]

in which

(20)\[\tilde{\mathcal A}(t,z) \approx M - \tilde{\gamma} J(t,z), \quad\text{and}\quad \tilde{\gamma} = \tilde{h} A^I_{i,i}.\]

Here, the solution \(\tilde{z}\), time \(\tilde{t}\), and step size \(\tilde{h}\) upon which the modified equation relies are merely values of these quantities from a previous iteration. In other words, the matrix \(\tilde{\mathcal A}\) is only computed rarely, and reused for repeated solves. The frequency at which \(\tilde{\mathcal A}\) is recomputed, and hence the choice between normal and modified Newton iterations, is determined by the parameter msbp of the function ARKStepSetMaxStepsBetweenLSet() in C/C++, or the LSETUP_MSBP argument to FARKSETIIN() in Fortran.

When using the dense and band solvers for the linear systems (19), the Jacobian \(J\) may be supplied by a user routine, or approximated internally by finite-differences. In the case of differencing, we use the standard approximation

\[J_{i,j}(t,z) \approx \frac{f_{I,i}(t,z+\sigma_j e_j) - f_{I,i}(t,z)}{\sigma_j},\]

where \(e_j\) is the jth unit vector, and the increments \(\sigma_j\) are given by

\[\sigma_j = \max\left\{ \sqrt{U}\, |z_j|, \frac{\sigma_0}{w_j} \right\}.\]

Here \(U\) is the unit roundoff, \(\sigma_0\) is a small dimensionless value, and \(w_j\) is the error weight defined in (8). In the dense case, this approach requires \(N\) evaluations of \(f_I\), one for each column of \(J\). In the band case, the columns of \(J\) are computed in groups, using the Curtis-Powell-Reid algorithm, with the number of \(f_I\) evaluations equal to the matrix bandwidth.
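A dense-matrix sketch of this difference-quotient approximation (illustrative only; ARKode's internal routine additionally implements the banded column-grouping noted above) is:

    #include <math.h>  /* fmax, sqrt, fabs */

    /* Approximate the dense Jacobian of fI at (t,z): column j is
       (fI(t, z + sigma_j e_j) - fI(t, z))/sigma_j, with
       sigma_j = max(sqrt(U)*|z_j|, sigma0/w_j).  J is column-major,
       J[j*N + i]; f0 and f1 are length-N work arrays. */
    void fd_jacobian(void (*fI)(double t, const double *z, double *f, int N),
                     double t, double *z, const double *w, int N,
                     double U, double sigma0,
                     double *J, double *f0, double *f1)
    {
      fI(t, z, f0, N);                 /* unperturbed f_I(t,z) */
      for (int j = 0; j < N; j++) {
        double sigma = fmax(sqrt(U)*fabs(z[j]), sigma0/w[j]);
        double save = z[j];
        z[j] += sigma;                 /* perturb component j */
        fI(t, z, f1, N);
        z[j] = save;                   /* restore */
        for (int i = 0; i < N; i++)
          J[j*N + i] = (f1[i] - f0[i])/sigma;
      }
    }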

We note that with the sparse direct solvers, the Jacobian must be supplied by a user routine.

Iterative linear solvers

In the case that an iterative linear solver is chosen, ARKStep utilizes a Newton method variant called an Inexact Newton iteration. Here, the matrix \({\mathcal A}\) is not itself constructed since the algorithms only require the product of this matrix with a given vector. Additionally, each Newton system (16) is not solved completely, since these linear solvers are iterative (hence the “inexact” in the name). As a result, for these linear solvers \({\mathcal A}\) is applied in a matrix-free manner,

\[{\mathcal A}(t,z)\, v = Mv - \gamma\, J(t,z)\, v.\]

The matrix-vector products \(Mv\) must be provided through a user-supplied routine; the matrix-vector products \(Jv\) are obtained by either calling an optional user-supplied routine, or through a finite difference approximation to the directional derivative:

\[J(t,z)\,v \approx \frac{f_I(t,z+\sigma v) - f_I(t,z)}{\sigma},\]

where the increment is chosen as \(\sigma = 1/\|v\|\) to ensure that \(\|\sigma v\| = 1\).
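A sketch of this matrix-free product is below (illustrative; vnorm stands for the WRMS norm \(\|v\|\) computed by the caller, and fz holds the precomputed \(f_I(t,z)\)):

    /* Approximate Jv = J(t,z)*v by a directional difference quotient
       with sigma = 1/||v||, so that ||sigma*v|| = 1.  ztmp and ftmp
       are length-N work arrays. */
    void jtimes(void (*fI)(double t, const double *z, double *f, int N),
                double t, const double *z, const double *fz,
                const double *v, double vnorm, int N,
                double *ztmp, double *ftmp, double *Jv)
    {
      double sigma = 1.0/vnorm;
      for (int i = 0; i < N; i++) ztmp[i] = z[i] + sigma*v[i];
      fI(t, ztmp, ftmp, N);
      for (int i = 0; i < N; i++) Jv[i] = (ftmp[i] - fz[i])/sigma;
    }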

As with the modified Newton method that reuses \({\mathcal A}\) between solves, ARKStep’s inexact Newton iteration may also recompute the preconditioner \(P\) infrequently to balance the high costs of matrix construction and factorization against the reduced convergence rate that may result from a stale preconditioner.

Updating the linear solver

In cases where recomputation of the Newton matrix \(\tilde{\mathcal A}\) or preconditioner \(P\) is lagged, ARKStep will force recomputation of these structures only in the following circumstances:

  • when starting the problem,
  • when more than 20 steps have been taken since the last update (this value may be changed via the msbp argument to ARKStepSetMaxStepsBetweenLSet() in C/C++, or the LSETUP_MSBP argument to FARKSETIIN() in Fortran),
  • when the value \(\bar{\gamma}\) of \(\gamma\) at the last update satisfies \(\left|\gamma/\bar{\gamma} - 1\right| > 0.2\) (this tolerance may be changed via the dgmax argument to ARKStepSetDeltaGammaMax() in C/C++, or the LSETUP_DGMAX argument to FARKSETRIN() in Fortran),
  • when a non-fatal convergence failure just occurred,
  • when an error test failure just occurred, or
  • if the problem is linearly implicit and \(\gamma\) has changed by a factor larger than 100 times machine epsilon.

When an update is forced due to a convergence failure, an update of \(\tilde{\mathcal A}\) or \(P\) may or may not involve a reevaluation of \(J\) (in \(\tilde{\mathcal A}\)) or of Jacobian data (in \(P\)), depending on whether errors in the Jacobian were the likely cause of the failure. More generally, the decision is made to re-evaluate \(J\) (or instruct the user to update \(P\)) when:

  • starting the problem,
  • more than 50 steps have been taken since the last evaluation,
  • a convergence failure occurred with an outdated matrix, and the value \(\bar{\gamma}\) of \(\gamma\) at the last update satisfies \(\left|\gamma/\bar{\gamma} - 1\right| > 0.2\),
  • a convergence failure occurred that forced a step size reduction, or
  • if the problem is linearly implicit and \(\gamma\) has changed by a factor larger than 100 times machine epsilon.

However, for direct linear solvers and preconditioners that do not rely on costly matrix construction and factorization operations (e.g. when using an iterative multigrid method as preconditioner), it may be more efficient to update these structures more frequently than the above heuristics specify, since the increased rate of linear/nonlinear solver convergence may more than account for the additional cost of Jacobian/preconditioner construction. To this end, a user may specify that the system matrix \({\mathcal A}\) and/or preconditioner \(P\) should be recomputed at every Newton iteration by supplying a negative value for the msbp argument to ARKStepSetMaxStepsBetweenLSet() in C/C++, or the LSETUP_MSBP argument to FARKSETIIN() in Fortran.

As will be further discussed in the section Preconditioning, in the case of most Krylov methods, preconditioning may be applied on the left, right, or on both sides of \({\mathcal A}\), with user-supplied routines for the preconditioner setup and solve operations.

Iteration Error Control

Nonlinear iteration error control

The stopping test for all of the nonlinear solver algorithms in the ARKStep time-stepping module is related to the temporal local error test, with the goal of keeping the nonlinear iteration errors from interfering with local error control. Denoting the final computed value of each stage solution as \(z_i^{(m)}\), and the true stage solution solving (14) as \(z_i\), we want to ensure that the iteration error \(z_i - z_i^{(m)}\) is “small” (recall that a norm less than 1 is already considered within an acceptable tolerance).

To this end, we first estimate the linear convergence rate \(R_i\) of the nonlinear iteration. We initialize \(R_i=1\), and reset it to this value whenever \(\tilde{\mathcal A}\) or \(P\) are updated. After computing a nonlinear correction \(\delta^{(m)} = z_i^{(m)} - z_i^{(m-1)}\), if \(m>0\) we update \(R_i\) as

\[R_i \leftarrow \max\{ 0.3 R_i, \left\|\delta^{(m)}\right\| / \left\|\delta^{(m-1)}\right\| \},\]

where the factor 0.3 is user-modifiable as the crdown input to the function ARKStepSetNonlinCRDown() in C/C++, or the NONLIN_CRDOWN argument to FARKSETRIN() in Fortran.

Let \(y_n^{(m)}\) denote the time-evolved solution constructed using our approximate nonlinear stage solutions, \(z_i^{(m)}\), and let \(y_n^{(\infty)}\) denote the time-evolved solution constructed using exact nonlinear stage solutions. We then use the estimate

\[\left\| y_n^{(\infty)} - y_n^{(m)} \right\| \approx \max_i \left\| z_i^{(m+1)} - z_i^{(m)} \right\| \approx \max_i R_i \left\| z_i^{(m)} - z_i^{(m-1)} \right\| = \max_i R_i \left\| \delta^{(m)} \right\|.\]

Therefore our convergence (stopping) test for the nonlinear iteration for each stage is

(21)\[\begin{split}R_i \left\|\delta^{(m)} \right\| < \epsilon,\end{split}\]

where the factor \(\epsilon\) has default value 0.1, and is user-modifiable as the nlscoef input to the function ARKStepSetNonlinConvCoef() in C/C++, or the NLCONV_COEF input to the function FARKSETRIN() in Fortran. We allow up to 3 nonlinear iterations (modifiable through ARKStepSetMaxNonlinIters() in C/C++, or as the MAX_NITERS argument to FARKSETIIN() in Fortran). We also declare the nonlinear iteration to be divergent if any of the ratios \(\|\delta^{(m)}\| / \|\delta^{(m-1)}\| > 2.3\) with \(m>0\) (the value 2.3 may be modified as the rdiv input to ARKStepSetNonlinRDiv() in C/C++, or the NONLIN_RDIV input to FARKSETRIN() in Fortran). If convergence fails in the fixed point iteration, or in the Newton iteration with \(J\) or \({\mathcal A}\) current, we reduce the step size \(h_n\) by a factor of 0.25 (modifiable via the etacf input to the ARKStepSetMaxCFailGrowth() function in C/C++, or the ADAPT_ETACF input to FARKSETRIN() in Fortran). The integration will be halted after 10 convergence failures (modifiable via the ARKStepSetMaxConvFails() function in C/C++, or the MAX_CONVFAIL argument to FARKSETIIN() in Fortran), or if a convergence failure occurs with \(h_n = h_\text{min}\).
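Collecting these pieces, one pass of the stopping/divergence logic might look like the following sketch (with the defaults crdown = 0.3, \(\epsilon\) = 0.1 and rdiv = 2.3 passed in as arguments; this is not the library's internal code):

    #include <math.h>  /* fmax */

    /* Update the convergence rate estimate R and test the correction
       norms: returns 1 on convergence, -1 on divergence, 0 to continue.
       delnrm and delnrm_prev are ||delta^{(m)}|| and ||delta^{(m-1)}||. */
    int nonlin_test(double *R, double delnrm, double delnrm_prev,
                    int m, double eps, double crdown, double rdiv)
    {
      if (m > 0) {
        double ratio = delnrm/delnrm_prev;
        if (ratio > rdiv) return -1;          /* declare divergence */
        *R = fmax(crdown*(*R), ratio);        /* rate update */
      }
      return ((*R)*delnrm < eps) ? 1 : 0;     /* stopping test (21) */
    }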

Linear iteration error control

When a Krylov method is used to solve the linear Newton systems (16), its errors must also be controlled. To this end, we approximate the linear iteration error in the solution vector \(\delta^{(m)}\) using the preconditioned residual vector, e.g. \(r = P^{-1}\left({\mathcal A}\delta^{(m)} + G\right)\) for the case of left preconditioning (the role of the preconditioner is further elaborated in the next section). In an attempt to ensure that the linear iteration errors do not interfere with the nonlinear solution error and local time integration error controls, we require that the norm of the preconditioned linear residual satisfies

(22)\[\|r\| \le \frac{\epsilon_L \epsilon}{10}.\]

Here \(\epsilon\) is the same value as that used above for the nonlinear error control. The factor of 10 is used to ensure that the linear solver error does not adversely affect the nonlinear solver convergence. Smaller values for the parameter \(\epsilon_L\) are typically useful for strongly nonlinear or very stiff ODE systems, while easier ODE systems may benefit from a value closer to 1. The default value is \(\epsilon_L = 0.05\), which may be modified through the ARKSpilsSetEpsLin() function in C/C++, or through the function FARKSPILSSETEPSLIN() in Fortran. We note that for linearly implicit problems the tolerance (22) is similarly used for the single Newton iteration.

Preconditioning

When using an inexact Newton method to solve the nonlinear system (14), ARKStep makes repeated use of an iterative method to solve linear systems of the form \({\mathcal A}x = b\), where \(x\) is a correction vector and \(b\) is a residual vector. If this iterative method is one of the scaled preconditioned iterative linear solvers supplied with ARKode, their efficiency may benefit tremendously from preconditioning. A system \({\mathcal A}x=b\) can be preconditioned using any one of:

\[\begin{split}(P^{-1}{\mathcal A})x = P^{-1}b & \qquad\text{[left preconditioning]}, \\ ({\mathcal A}P^{-1})Px = b & \qquad\text{[right preconditioning]}, \\ (P_L^{-1} {\mathcal A} P_R^{-1}) P_R x = P_L^{-1}b & \qquad\text{[left and right preconditioning]}.\end{split}\]

These Krylov iterative methods are then applied to a system with the matrix \(P^{-1}{\mathcal A}\), \({\mathcal A}P^{-1}\), or \(P_L^{-1} {\mathcal A} P_R^{-1}\), instead of \({\mathcal A}\). In order to improve the convergence of the Krylov iteration, the preconditioner matrix \(P\), or the product \(P_L P_R\) in the third case, should in some sense approximate the system matrix \({\mathcal A}\). Simultaneously, in order to be cost-effective the matrix \(P\) (or matrices \(P_L\) and \(P_R\)) should be reasonably efficient to evaluate and solve. Finding an optimal point in this tradeoff between rapid convergence and low cost can be quite challenging. Good choices are often problem-dependent (for example, see [BH1989] for an extensive study of preconditioners for reaction-transport systems).

Most of the iterative linear solvers supplied with SUNDIALS allow for all three types of preconditioning (left, right or both), although for non-symmetric matrices \({\mathcal A}\) we know of few situations where preconditioning on both sides is superior to preconditioning on one side only (with the product \(P = P_L P_R\)). Moreover, for a given preconditioner matrix, the merits of left vs. right preconditioning are unclear in general, so we recommend that the user experiment with both choices. Performance can differ between these since the inverse of the left preconditioner is included in the linear system residual whose norm is being tested in the Krylov algorithm. As a rule, however, if the preconditioner is the product of two matrices, we recommend that preconditioning be done either on the left only or the right only, rather than using one factor on each side. An exception to this rule is the PCG solver, that itself assumes a symmetric matrix \({\mathcal A}\), since the PCG algorithm in fact applies the single preconditioner matrix \(P\) in both left/right fashion as \(P^{-1/2} {\mathcal A} P^{-1/2}\).

Typical preconditioners used with ARKStep are based on approximations to the system Jacobian, \(J = \partial f_I / \partial y\). Since the Newton iteration matrix involved is \({\mathcal A} = M - \gamma J\), any approximation \(\bar{J}\) to \(J\) yields a matrix that is of potential use as a preconditioner, namely \(P = M - \gamma \bar{J}\). Because the Krylov iteration occurs within a Newton iteration and further also within a time integration, and since each of these iterations has its own test for convergence, the preconditioner may use a very crude approximation, as long as it captures the dominant numerical feature(s) of the system. We have found that the combination of a preconditioner with the Newton-Krylov iteration, using even a relatively poor approximation to the Jacobian, can be surprisingly superior to using the same matrix without Krylov acceleration (i.e., a modified Newton iteration), as well as to using the Newton-Krylov method with no preconditioning.

Implicit predictors

For problems with implicit components, ARKStep will employ a prediction algorithm for constructing the initial guesses for each implicit Runge-Kutta stage, \(z_i^{(0)}\). As is well-known with nonlinear solvers, the selection of a good initial guess can have dramatic effects on both the speed and robustness of the solve, making the difference between rapid quadratic convergence and divergence of the iteration. To this end, ARKStep implements a variety of prediction algorithms that may be selected by the user. In each case, the stage guesses \(z_i^{(0)}\) are constructed explicitly using readily-available information, including the previous step solutions \(y_{n-1}\) and \(y_{n-2}\), as well as any previous stage solutions \(z_j\), \(j<i\). In most cases, prediction is performed by constructing an interpolating polynomial through existing data, which is then evaluated at the desired stage time to provide an inexpensive but (hopefully) reasonable prediction of the stage solution. Specifically, for most Runge-Kutta methods each stage solution satisfies

\[z_i \approx y(t^I_{n,i}),\]

so by constructing an interpolating polynomial \(p_q(t)\) through a set of existing data, the initial guess at stage solutions may be approximated as

(23)\[z_i^{(0)} = p_q(t^I_{n,i}).\]

As the stage times for implicit ARK and DIRK stages usually satisfy \(c_j^I > 0\), it is typically the case that \(t^I_{n,j}\) is outside of the time interval containing the data used to construct \(p_q(t)\), hence (23) will correspond to an extrapolant instead of an interpolant. The dangers of using a polynomial interpolant to extrapolate values outside the interpolation interval are well-known, with higher-order polynomials and predictions further outside the interval resulting in the greatest potential inaccuracies.

The prediction algorithms available in ARKStep therefore construct a variety of interpolants \(p_q(t)\), having different polynomial order and using different interpolation data, to support ‘optimal’ choices for different types of problems, as described below.

Trivial predictor

The so-called “trivial predictor” is given by the formula

\[p_0(t) = y_{n-1}.\]

While this piecewise-constant interpolant is clearly not a highly accurate candidate for problems with time-varying solutions, it is often the most robust approach for highly stiff problems, or for problems with implicit constraints whose violation may cause illegal solution values (e.g. a negative density or temperature).
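
The predictor is selected when configuring the integrator. The sketch below assumes a selection routine named ARKodeSetPredictorMethod() and the integer codes 0-5 for the predictors described in this section (0 = trivial, 1 = maximum order, 2 = variable order, 3 = cutoff order, 4 = bootstrap, 5 = minimum correction); consult the interface documentation for a given release to confirm both the routine name and the codes:

    /* Select the trivial predictor (code 0), e.g. for a highly stiff
       problem or one whose implicit constraints could be violated by
       extrapolated stage guesses. */
    int flag = ARKodeSetPredictorMethod(arkode_mem, 0);
    if (flag != ARK_SUCCESS) { /* handle error */ }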

Maximum order predictor

At the opposite end of the spectrum, ARKStep can utilize ARKode’s interpolation module Interpolation to construct a higher-order polynomial interpolant, \(p_q(t)\), based on the two most-recently-computed solutions, \(\left\{ y_{n-2}, f_{n-2}, y_{n-1}, f_{n-1} \right\}\). ARKStep then evaluates \(p_q(t)\) to extrapolate a predicted stage solution at each of the stage times \(t^I_{n,i}\). This polynomial order is the same as that specified by the user for dense output, via the functions ARKodeSetDenseOrder() in C/C++ or FARKSETIIN() in Fortran (via the DENSE_ORDER argument).
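
Since the dense-output degree controls this predictor, requesting cubic dense output also selects a cubic extrapolant for the stage guesses (a minimal sketch, assuming the usual C calling convention for the ARKodeSetDenseOrder() routine named above):

    /* Request cubic (q = 3) dense output; the maximum order predictor
       then extrapolates with the same cubic interpolant. */
    int flag = ARKodeSetDenseOrder(arkode_mem, 3);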

Variable order predictor

This predictor attempts to use higher-order polynomials \(p_q(t)\) for predicting earlier stages, and lower-order interpolants for later stages. It uses the same interpolation module as described above, but chooses \(q\) adaptively based on the stage index \(i\), under the (rather tenuous) assumption that the stage times are increasing, i.e. \(c^I_j < c^I_k\) for \(j<k\):

\[q = \max\{ q_\text{max} - i,\; 1 \}.\]

Cutoff order predictor

This predictor follows a similar idea as the previous algorithm, but monitors the actual stage times to determine the polynomial interpolant to use for prediction. Denoting \(\tau = c_i^I \frac{h_n}{h_{n-1}}\), the polynomial degree \(q\) is chosen as:

\[\begin{split}q = \begin{cases} q_\text{max}, & \text{if}\quad \tau < \tfrac12,\\ 1, & \text{otherwise}. \end{cases}\end{split}\]
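
As a worked illustration, assume equal step sizes \(h_n = h_{n-1}\), so that \(\tau = c^I_i\):

\[c^I_i = 0.4 \;\Rightarrow\; \tau = 0.4 < \tfrac12 \;\Rightarrow\; q = q_\text{max}, \qquad\qquad c^I_i = 0.7 \;\Rightarrow\; \tau = 0.7 \ge \tfrac12 \;\Rightarrow\; q = 1.\]

Thus early stages (small \(c^I_i\)) are predicted with the full-degree polynomial, while stages beyond the midpoint of the step fall back to linear prediction.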

Bootstrap predictor

This predictor does not use any information from the preceding step, instead using information only within the current step \([t_{n-1},t_n]\). In addition to the solution and ODE right-hand side function, \(y_{n-1}\) and \(f_{n-1}=\left[f_E(t_{n-1},y_{n-1}) + f_I(t_{n-1},y_{n-1})\right]\), this approach uses the right-hand side from a previously computed stage solution in the same step, \(f(t_{n-1}+c^I_j h,z_j)\), to construct the quadratic Hermite interpolant \(p(t)\) satisfying \(p(t_{n-1}) = y_{n-1}\), \(\dot{p}(t_{n-1}) = f_{n-1}\) and \(\dot{p}(t_{n-1}+c^I_j h) = f(t_{n-1}+c^I_j h, z_j)\). Defining the constants \(\tilde{h} = c^I_j h\) and \(\tau = c^I_i h\), the resulting predictor is given by

\[z_i^{(0)} = y_{n-1} + \left(\tau - \frac{\tau^2}{2\tilde{h}}\right) f(t_{n-1},y_{n-1}) + \frac{\tau^2}{2\tilde{h}} f(t_{n-1}+\tilde{h},z_j).\]

For stages without a preceding nonzero stage time (i.e. when \(c^I_j = 0\) for all \(j<i\)), this method reduces to using the trivial predictor \(z_i^{(0)} = y_{n-1}\). For stages having multiple preceding nonzero abscissae \(c^I_j\), we choose the stage with the largest \(c^I_j\) value, to minimize the level of extrapolation used in the prediction.

We note that in general, each stage solution \(z_j\) has significantly worse accuracy than the time step solutions \(y_{n-1}\), due to the difference between the stage order and the method order in Runge-Kutta methods. As a result, the accuracy of this predictor will generally be rather limited, but it is provided for problems in which this increased stage error is preferable to the effects of extrapolating far outside of the previous time step interval \([t_{n-2},t_{n-1}]\).

We further note that although this method could be used with non-identity mass matrix \(M\ne I\), support for that mode is not currently implemented, so selection of this predictor in the case that \(M\ne I\) will result in use of the Trivial predictor.

Minimum correction predictor

The last ARKStep predictor is not interpolation-based; instead, it utilizes all existing stage information from the current step to construct a prediction containing every term in the stage equation except the implicit term involving the (unknown) current stage solution. Specifically, as discussed in equations (2) and (14), each stage solves a nonlinear equation

\[\begin{split}z_i &= y_{n-1} + h_n \sum_{j=1}^{i-1} A^E_{i,j} f_E(t^E_{n,j}, z_j) + h_n \sum_{j=1}^{i} A^I_{i,j} f_I(t^I_{n,j}, z_j), \\ \Leftrightarrow \qquad & \\ G(z_i) &\equiv z_i - h_n A^I_{i,i} f_I(t^I_{n,i}, z_i) - a_i = 0.\end{split}\]

This prediction method merely computes the predictor \(z_i^{(0)}\) by omitting the implicit term involving the current stage:

\[\begin{split}z_i^{(0)} &= y_{n-1} + h_n \sum_{j=1}^{i-1} A^E_{i,j} f_E(t^E_{n,j}, z_j) + h_n \sum_{j=1}^{i-1} A^I_{i,j} f_I(t^I_{n,j}, z_j), \\ \Leftrightarrow \quad & \\ z_i^{(0)} &= a_i.\end{split}\]

We again note that although this method could be used with non-identity mass matrix \(M\ne I\), support for that mode is not currently implemented, so selection of this predictor in the case that \(M\ne I\) will result in use of the Trivial predictor.

Mass matrix solver

Within the algorithms described above, there are multiple locations where a matrix-vector product

(24)\[b = M x\]

or a linear solve

(25)\[x = M^{-1} b\]

is required.

Of course, for problems in which \(M=I\) neither of these operations is required. However, for problems with a non-identity \(M\), ARKStep may handle the linear solves (25) using either an iterative linear solver or a direct linear solver, in the same manner as described in the section Linear solver methods for solving the linear Newton systems.

At present, for DIRK and ARK problems using a direct solver for the Newton nonlinear iterations, the type of matrix (dense, band or sparse) for the Newton systems \(\tilde{\mathcal A}\delta = -G\) must match the type of linear solver used for these mass-matrix systems, since \(M\) is included inside \(\tilde{\mathcal A}\). When direct methods are employed, the user must supply a routine of type ARKDlsMassFn() that computes \(M\) in dense, band or sparse form, matching the structure of \({\mathcal A}\). This matrix structure is also used internally to perform any requisite mass matrix-vector products (24).
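
As an illustration, the sketch below fills a dense mass matrix for a 1D linear finite-element discretization on a uniform mesh of width \(\Delta x\) (the classic tridiagonal \(\frac{\Delta x}{6}\,[1,\,4,\,1]\) stencil, with boundary rows simplified). It assumes the ARKDlsMassFn prototype and the SUNMATRIX_DENSE accessor macros; the MassData structure carrying the mesh width is purely hypothetical:

    #include <sunmatrix/sunmatrix_dense.h>  /* SM_ELEMENT_D, SM_ROWS_D */

    typedef struct { realtype dx; } MassData;  /* mesh width, user-held */

    /* Fill M with the tridiagonal FEM mass stencil (dx/6)*[1 4 1]. */
    static int Mass(realtype t, SUNMatrix M, void *user_data,
                    N_Vector tmp1, N_Vector tmp2, N_Vector tmp3)
    {
      realtype dx = ((MassData*) user_data)->dx;
      sunindextype i, N = SM_ROWS_D(M);
      SUNMatZero(M);  /* clear any stale entries */
      for (i = 0; i < N; i++) {
        SM_ELEMENT_D(M, i, i) = 4.0*dx/6.0;
        if (i > 0)   SM_ELEMENT_D(M, i, i-1) = dx/6.0;
        if (i < N-1) SM_ELEMENT_D(M, i, i+1) = dx/6.0;
      }
      return 0;  /* success */
    }

This routine would then be registered with the integrator (e.g. via a routine such as ARKDlsSetMassFn()) after attaching a direct linear solver for the mass system.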

When iterative methods are selected, the user must supply a routine of type ARKSpilsMassTimesVecFn() to perform the mass-matrix-vector product, \(Mv\). As with iterative solvers for the Newton systems, preconditioning may be applied to aid in solution of the mass matrix systems \(Mx=b\). When using an iterative mass matrix linear solver, we require that the norm of the preconditioned linear residual satisfies

(26)\[\|r\| \le \epsilon_L \epsilon,\]

where again, \(\epsilon\) is the nonlinear solver tolerance parameter from (21). When using iterative solvers for both the Newton systems and the mass matrix systems, \(\epsilon_L\) may be specified separately for each of the tolerances (22) and (26); the mass matrix linear solver value of \(\epsilon_L\) may be modified using ARKSpilsSetMassEpsLin() in C/C++, or FARKSPILSSETMASSEPSLIN() in Fortran.
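
For example, the finite-element mass matrix above may instead be applied matrix-free. The sketch below assumes the ARKSpilsMassTimesVecFn prototype and the serial N_Vector accessor macros, with the mesh width passed through the user-supplied mtimes data pointer:

    #include <nvector/nvector_serial.h>  /* NV_DATA_S, NV_LENGTH_S */

    /* Matrix-free product Mv for the tridiagonal FEM mass matrix
       (dx/6)*[1 4 1] (boundary rows again simplified). */
    static int MTimes(N_Vector v, N_Vector Mv, realtype t, void *mtimes_data)
    {
      realtype dx = *((realtype*) mtimes_data);
      realtype *vd = NV_DATA_S(v), *Mvd = NV_DATA_S(Mv);
      sunindextype i, N = NV_LENGTH_S(v);
      for (i = 0; i < N; i++) {
        realtype left  = (i > 0)   ? vd[i-1] : 0.0;
        realtype right = (i < N-1) ? vd[i+1] : 0.0;
        Mvd[i] = dx/6.0 * (left + 4.0*vd[i] + right);
      }
      return 0;  /* success */
    }

This routine would be attached through the mass-times setup call (e.g. a routine such as ARKSpilsSetMassTimes()), and a subsequent call such as ARKSpilsSetMassEpsLin(arkode_mem, 0.05) would then adjust the safety factor \(\epsilon_L\) appearing in (26).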

In the above algorithmic description there are three locations where a linear solve of the form (25) is required: (a) in constructing the time-evolved solution \(y_n\), (b) in estimating the local temporal truncation error, and (c) in constructing predictors for the implicit solver iteration (see section Maximum order predictor). Specifically, to construct the time-evolved solution \(y_n\) from equation (2) we must solve

\[\begin{split}&M y_n \ = \ M y_{n-1} + h_n \sum_{i=1}^{s} \left( b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right), \\ \Leftrightarrow \qquad & \\ &M (y_n -y_{n-1}) \ = \ h_n \sum_{i=1}^{s} \left(b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right), \\ \Leftrightarrow \qquad & \\ &M \nu \ = \ h_n \sum_{i=1}^{s} \left(b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right),\end{split}\]

for the update \(\nu = y_n - y_{n-1}\). Construction of the stages \(z_i\) requires no additional mass matrix solves, as those are included in the nonlinear system solves. Similarly, in computing the local temporal error estimate \(T_n\) from equation (11) we must solve systems of the form

\[M\, T_n = h \sum_{i=1}^{s} \left[ \left(b^E_i - \tilde{b}^E_i\right) f_E(t^E_{n,i}, z_i) + \left(b^I_i - \tilde{b}^I_i\right) f_I(t^I_{n,i}, z_i) \right].\]

Lastly, in constructing dense output and implicit predictors of order 2 or higher (as in the section Maximum order predictor above), we must compute the derivative information \(f_k\) from the equation

\[M f_k = f_E(t_k, y_k) + f_I(t_k, y_k).\]

In total, these require only two mass-matrix linear solves (25) per attempted time step, with one more upon completion of a time step that meets the solution accuracy requirements.