Mathematical Considerations

ARKode solves ODE initial value problems (IVP) in \(\mathbb{R}^N\) posed in linearly-implicit form,

(1)\[M \dot{y} = f(t,y), \qquad y(t_0) = y_0.\]

Here, \(t\) is the independent variable (e.g. time), and the dependent variables are given by \(y \in \mathbb{R}^N\), where we use the notation \(\dot{y}\) to denote \(\frac{dy}{dt}\).

\(M\) is a user-specified nonsingular operator from \(\mathbb{R}^N \to \mathbb{R}^N\). This operator is currently assumed to be independent of both \(t\) and \(y\). For standard systems of ordinary differential equations and for problems arising from the spatial semi-discretization of partial differential equations using finite difference, finite volume, or spectral finite element methods, \(M\) is typically the identity matrix, \(I\). For PDEs using standard finite-element spatial semi-discretizations, \(M\) is typically a well-conditioned mass matrix that is fixed throughout a simulation (except in the case of a spatially-adaptive method, where \(M\) can change between, but not within, time steps).

The ODE right-hand side is given by the function \(f(t,y)\), i.e. in general we make no assumption that the problem (1) is autonomous (\(f=f(y)\)). In general, the time integration methods within ARKode support additive splittings of this right-hand side function, as described in the subsections that follow. Through these splittings, the time-stepping methods currently supplied with ARKode are designed to solve stiff, nonstiff, or mixed stiff/nonstiff problems. Roughly speaking, stiffness is characterized by the presence of at least one rapidly damped mode, whose time constant is small compared to the time scale of the solution itself.

In the sub-sections that follow, we elaborate on the numerical methods utilized in ARKode. We first discuss the “single-step” nature of the ARKode infrastructure, including its usage modes and approaches for interpolated solution output. We then discuss the current suite of time-stepping modules supplied with ARKode, including the ARKStep module for additive Runge-Kutta methods and the ERKStep module that is optimized for explicit Runge-Kutta methods. We then discuss the adaptive temporal error controllers shared by the time-stepping modules, including discussion of our choice of norms for measuring errors within various components of the solver.

We then discuss the nonlinear and linear solver strategies used by ARKode’s time-stepping modules for solving implicit algebraic systems that arise in computing each stage and/or step: nonlinear solvers, linear solvers, preconditioners, error control within iterative nonlinear and linear solvers, algorithms for initial predictors for implicit stage solutions, and approaches for handling non-identity mass-matrices.

We conclude with a section describing ARKode’s rootfinding capabilities, which may be used to stop integration of a problem prematurely based on traversal of roots in user-specified functions.

Adaptive single-step methods

The ARKode infrastructure is designed to support single-step, IVP integration methods, i.e.

\[y_{n} = \varphi(y_{n-1}, h_n)\]

where \(y_{n-1}\) is an approximation to the solution \(y(t_{n-1})\), \(y_{n}\) is an approximation to the solution \(y(t_n)\), \(t_n = t_{n-1} + h_n\), and the approximation method is represented by the function \(\varphi\).

The choice of step size \(h_n\) is determined by the time-stepping method (based on user-provided inputs, typically accuracy requirements). However, users may place minimum/maximum bounds on \(h_n\) if desired.

ARKode’s time stepping modules may be run in a variety of “modes”:

  • NORMAL – The solver will take internal steps until it has just overtaken a user-specified output time, \(t_\text{out}\), in the direction of integration, i.e. \(t_{n-1} < t_\text{out} \le t_{n}\) for forward integration, or \(t_{n} \le t_\text{out} < t_{n-1}\) for backward integration. It will then compute an approximation to the solution \(y(t_\text{out})\) by interpolation (using one of the dense output routines described in the section Interpolation).
  • ONE-STEP – The solver will only take a single internal step \(y_{n-1} \to y_{n}\) and then return control back to the calling program. If this step will overtake \(t_\text{out}\) then the solver will again return an interpolated result; otherwise it will return a copy of the internal solution \(y_{n}\).
  • NORMAL-TSTOP – The solver will take internal steps until the next step will overtake \(t_\text{out}\). It will then limit this next step so that \(t_n = t_{n-1} + h_n = t_\text{out}\), and once the step completes it will return a copy of the internal solution \(y_{n}\).
  • ONE-STEP-TSTOP – The solver will check whether the next step will overtake \(t_\text{out}\) – if not then this mode is identical to “one-step” above; otherwise it will limit this next step so that \(t_n = t_{n-1} + h_n = t_\text{out}\). In either case, once the step completes it will return a copy of the internal solution \(y_{n}\).

We note that interpolated solutions may be slightly less accurate than the internal solutions produced by the solver. Hence, to ensure that the returned value has full method accuracy one of the “tstop” modes may be used.
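As a concrete illustration of these modes, the sketch below shows a minimal output loop in C using the ARKStep interface. The names used here (ARKStepEvolve, ARK_NORMAL, ARKStepSetStopTime) follow recent SUNDIALS releases and are assumptions relative to this text; creation of the integrator memory, vectors, and tolerances is omitted.

    #include <arkode/arkode_arkstep.h>   /* ARKStepEvolve, ARK_NORMAL (assumed interface) */

    /* Hypothetical NORMAL-mode output loop: advance past each output time and return
       an interpolated solution there.  arkode_mem and y are assumed to have been
       created and configured elsewhere (problem setup is omitted in this sketch).      */
    static int output_loop(void *arkode_mem, N_Vector y,
                           realtype t0, realtype dtout, int nout)
    {
      realtype tout = t0 + dtout, tret;
      for (int iout = 0; iout < nout; iout++) {
        /* In NORMAL mode the solver steps until tout is overtaken, then interpolates.
           To stop exactly at a given time instead (the "tstop" modes above), one would
           call ARKStepSetStopTime(arkode_mem, tstop) before evolving.                  */
        int flag = ARKStepEvolve(arkode_mem, tout, y, &tret, ARK_NORMAL);
        if (flag < 0) return flag;       /* unrecoverable solver failure                */
        tout += dtout;
      }
      return 0;
    }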

Interpolation

As mentioned above, the time-stepping modules in ARKode support interpolation of solutions \(y(t_\text{out})\) where \(t_\text{out}\) occurs within a completed time step from \(t_{n-1} \to t_n\). Additionally, this interpolation capability supports extrapolation of solutions to \(t\) outside this interval (e.g. to construct predictors for iterative nonlinear and linear solvers). To this end, ARKode currently supports construction of polynomial interpolants \(p_q(t)\) of order up to \(q=5\), although this order may be adjusted by the user.

These interpolants are either of Lagrange or Hermite form, and use the data \(\left\{ y_{n-1}, f_{n-1}, y_{n}, f_{n} \right\}\), where here we use the simplified notation \(f_{k}\) to denote \(f(t_k,y_k)\). Defining a normalized “time” variable, \(\tau\), for the most-recently-computed solution interval \(t_{n-1} \to t_{n}\) as

\[\tau(t) = \frac{t-t_n}{h_{n}},\]

so that \(\tau \in [-1,0]\) over this interval, we then construct the interpolants \(p_q(t)\) as follows:

  • \(q=0\): constant interpolant

    \[p_0(\tau) = \frac{y_{n-1} + y_{n}}{2}.\]
  • \(q=1\): linear Lagrange interpolant

    \[p_1(\tau) = -\tau\, y_{n-1} + (1+\tau)\, y_{n}.\]
  • \(q=2\): quadratic Hermite interpolant

    \[p_2(\tau) = \tau^2\,y_{n-1} + (1-\tau^2)\,y_{n} + h(\tau+\tau^2)\,f_{n}.\]
  • \(q=3\): cubic Hermite interpolant

    \[p_3(\tau) = (3\tau^2 + 2\tau^3)\,y_{n-1} + (1-3\tau^2-2\tau^3)\,y_{n} + h(\tau^2+\tau^3)\,f_{n-1} + h(\tau+2\tau^2+\tau^3)\,f_{n}.\]

We note that although interpolants of order \(> 5\) are possible, these are not currently implemented due to their increased computing and storage costs. However, these may be added in future releases.
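To make the use of these formulas concrete, the following C sketch evaluates the cubic Hermite interpolant \(p_3\) componentwise at an arbitrary time \(t\), assuming vectors are stored as plain C arrays; the function name and calling sequence are illustrative only, not ARKode’s internal routine.

    /* Evaluate the cubic Hermite interpolant p_3 at time t, using the data
       {y_{n-1}, f_{n-1}, y_n, f_n} and tau = (t - t_n)/h as defined above.
       Vectors are plain C arrays of length N; all names are illustrative.        */
    static void dense_output_p3(double t, double tn, double h, int N,
                                const double *ym1, const double *fm1,  /* y_{n-1}, f_{n-1} */
                                const double *yn,  const double *fn,   /* y_n,     f_n     */
                                double *out)
    {
      double tau  = (t - tn) / h;              /* tau = -1 at t_{n-1}, 0 at t_n */
      double tau2 = tau * tau, tau3 = tau2 * tau;
      double a = 3.0*tau2 + 2.0*tau3;          /* multiplies y_{n-1}            */
      double b = 1.0 - 3.0*tau2 - 2.0*tau3;    /* multiplies y_n                */
      double c = h * (tau2 + tau3);            /* multiplies f_{n-1}            */
      double d = h * (tau + 2.0*tau2 + tau3);  /* multiplies f_n                */
      for (int i = 0; i < N; i++)
        out[i] = a*ym1[i] + b*yn[i] + c*fm1[i] + d*fn[i];
    }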

ARKStep – Additive Runge-Kutta methods

The ARKStep time-stepping module in ARKode is designed for IVPs of the form

(2)\[M \dot{y} = f_E(t,y) + f_I(t,y), \qquad y(t_0) = y_0,\]

i.e. the right-hand side function is additively split into two components:

  • \(f_E(t,y)\) contains the “nonstiff” components of the system. This will be integrated using an explicit method.
  • \(f_I(t,y)\) contains the “stiff” components of the system. This will be integrated using an implicit method.

In solving the IVP (2), ARKStep utilizes variable-step, embedded, additive Runge-Kutta methods (ARK), corresponding to algorithms of the form

(3)\[\begin{split}M z_i &= M y_{n-1} + h_n \sum_{j=1}^{i-1} A^E_{i,j} f_E(t^E_{n,j}, z_j) + h_n \sum_{j=1}^{i} A^I_{i,j} f_I(t^I_{n,j}, z_j), \quad i=1,\ldots,s, \\ M y_n &= M y_{n-1} + h_n \sum_{i=1}^{s} \left(b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right), \\ M \tilde{y}_n &= M y_{n-1} + h_n \sum_{i=1}^{s} \left( \tilde{b}^E_i f_E(t^E_{n,i}, z_i) + \tilde{b}^I_i f_I(t^I_{n,i}, z_i)\right).\end{split}\]

Here \(\tilde{y}_n\) are embedded solutions that approximate \(y(t_n)\) and are used for error estimation; these typically have slightly lower accuracy than the computed solutions \(y_n\). The internal stage times are abbreviated using the notation \(t^E_{n,j} = t_{n-1} + c^E_j h_n\) and \(t^I_{n,j} = t_{n-1} + c^I_j h_n\). The ARK method is primarily defined through the coefficients \(A^E \in \mathbb{R}^{s\times s}\), \(A^I \in \mathbb{R}^{s\times s}\), \(b^E \in \mathbb{R}^{s}\), \(b^I \in \mathbb{R}^{s}\), \(c^E \in \mathbb{R}^{s}\) and \(c^I \in \mathbb{R}^{s}\), that correspond to the explicit and implicit Butcher tables. Additional coefficients \(\tilde{b}^E \in \mathbb{R}^{s}\) and \(\tilde{b}^I \in \mathbb{R}^{s}\) are used to construct the embedding \(\tilde{y}_n\). We note that ARKStep currently enforces the constraint that the explicit and implicit methods in an ARK pair must share the same number of stages, \(s\); however it allows the possibility of different explicit and implicit stage times, i.e. \(c^E\) need not equal \(c^I\).

The user of ARKStep must choose appropriately among three classes of methods: ImEx, explicit, and implicit. All of ARKode’s available Butcher tables encoding the coefficients \(c^E\), \(c^I\), \(A^E\), \(A^I\), \(b^E\), \(b^I\), \(\tilde{b}^E\) and \(\tilde{b}^I\) are further described in the Appendix: Butcher tables.

For mixed stiff/nonstiff problems, a user should provide both of the functions \(f_E\) and \(f_I\) that define the IVP system. For such problems, ARKStep currently implements the ARK methods proposed in [KC2003], allowing for methods having order of accuracy \(q = \{3,4,5\}\); the tables for these methods are given in the section Additive Butcher tables. Additionally, user-defined ARK tables are supported.

For nonstiff problems, a user may specify that \(f_I = 0\), i.e. the equation (2) reduces to the non-split IVP

(4)\[M\, \dot{y} = f_E(t,y), \qquad y(t_0) = y_0.\]

In this scenario, the coefficients \(A^I=0\), \(c^I=0\), \(b^I=0\) and \(\tilde{b}^I=0\) in (3), and the ARK methods reduce to classical explicit Runge-Kutta methods (ERK). For these classes of methods, ARKode provides coefficients with orders of accuracy \(q = \{2,3,4,5,6,8\}\), with embeddings of orders \(p = \{1,2,3,4,5,7\}\). These default to the Heun-Euler-2-1-2, Bogacki-Shampine-4-2-3, Zonneveld-5-3-4, Cash-Karp-6-4-5, Verner-8-5-6 and Fehlberg-13-7-8 methods, respectively. As with ARK methods, user-defined ERK tables are supported.

Finally, for stiff problems the user may specify that \(f_E = 0\), so the equation (2) reduces to the non-split IVP

(5)\[M\, \dot{y} = f_I(t,y), \qquad y(t_0) = y_0.\]

Similarly to ERK methods, in this scenario the coefficients \(A^E=0\), \(c^E=0\), \(b^E=0\) and \(\tilde{b}^E=0\) in (3), and the ARK methods reduce to classical diagonally-implicit Runge-Kutta methods (DIRK). For these classes of methods, ARKode provides tables with orders of accuracy \(q = \{2,3,4,5\}\), with embeddings of orders \(p = \{1,2,3,4\}\). These default to the SDIRK-2-1-2, ARK-4-2-3 (implicit), SDIRK-5-3-4 and ARK-8-4-5 (implicit) methods, respectively. Again, user-defined DIRK tables are supported.

ERKStep – Explicit Runge-Kutta methods

The ERKStep time-stepping module in ARKode is designed for IVPs of the form

(6)\[\dot{y} = f(t,y), \qquad y(t_0) = y_0.\]

For such problems, ERKStep provides variable-step, embedded, explicit Runge-Kutta methods (ERK), corresponding to algorithms of the form

(7)\[\begin{split}z_i &= y_{n-1} + h_n \sum_{j=1}^{i-1} A_{i,j} f(t_{n,j}, z_j), \quad i=1,\ldots,s, \\ y_n &= y_{n-1} + h_n \sum_{i=1}^{s} b_i f(t_{n,i}, z_i), \\ \tilde{y}_n &= y_{n-1} + h_n \sum_{i=1}^{s} \tilde{b}_i f(t_{n,i}, z_i),\end{split}\]

where the variables have the same meanings as in the previous section. We note that the problem (6) is fully encapsulated in the more general problem (4), and that the algorithm (7) is similarly encapsulated in the more general algorithm (3). While it therefore follows that ARKStep can be used to solve every problem solvable by ERKStep, using the same set of methods, we include ERKStep as a distinct time-stepping module since this simplified form admits a more efficient and memory-friendly solution process than the more general form.

Error norms

In the process of controlling errors at various levels (time integration, nonlinear solution, linear solution), the methods in ARKode use a weighted root-mean-square norm, denoted \(\|\cdot\|_\text{WRMS}\), for all error-like quantities,

(8)\[\|v\|_\text{WRMS} = \left( \frac{1}{N} \sum_{i=1}^N \left(v_i\, w_i\right)^2\right)^{1/2}.\]

The utility of this norm arises in the specification of the weighting vector \(w\), that combines the units of the problem with user-supplied values that specify an “acceptable” level of error. To this end, we construct an error weight vector using the most-recent step solution and user-supplied relative and absolute tolerances, namely

(9)\[w_i = \frac{1}{RTOL\cdot |y_{n-1,i}| + ATOL_i}.\]

Since \(1/w_i\) represents a tolerance in the \(i\)-th component of the solution vector \(y\), a vector whose WRMS norm is 1 is regarded as “small.” For brevity, unless specified otherwise we will drop the subscript WRMS on norms in the remainder of this section.
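For illustration, a direct C translation of the weight definition (9) and the WRMS norm (8) is sketched below, with vectors as plain arrays and a vector-valued absolute tolerance assumed; these helpers are hypothetical and not part of the ARKode API.

    #include <math.h>

    /* Error weights per equation (9): w_i = 1 / (rtol*|y_i| + atol_i).        */
    static void error_weights(int N, const double *y, double rtol,
                              const double *atol, double *w)
    {
      for (int i = 0; i < N; i++)
        w[i] = 1.0 / (rtol * fabs(y[i]) + atol[i]);
    }

    /* Weighted root-mean-square norm per equation (8).                        */
    static double wrms_norm(int N, const double *v, const double *w)
    {
      double sum = 0.0;
      for (int i = 0; i < N; i++)
        sum += (v[i] * w[i]) * (v[i] * w[i]);
      return sqrt(sum / N);
    }

With these definitions, a vector whose entries each sit exactly at their tolerance level has \(\|v\|_\text{WRMS} = 1\), matching the interpretation above.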

Additionally, for problems involving a non-identity mass matrix, \(M\ne I\), the units of equation (2) may differ from the units of the solution \(y\). In this case, we may additionally construct a residual weight vector,

(10)\[w_i = \frac{1}{RTOL\cdot | \left[M y_{n-1}\right]_i| + ATOL'_i},\]

where the user may specify a separate absolute residual tolerance value or array, \(ATOL'\). The choice of weighting vector used in any given norm is determined by the quantity being measured: values having “solution” units use (9), whereas values having “equation” units use (10). Obviously, for problems with \(M=I\), the solution and equation units are identical, so the solvers in ARKode will use (9) when computing all error norms.

Time step adaptivity

A critical component of IVP “solvers” (rather than just time-steppers) is their adaptive control of local truncation error (LTE). At every step, we estimate the local error, and ensure that it satisfies tolerance conditions. If this local error test fails, then the step is recomputed with a reduced step size. To this end, the Runge-Kutta methods packaged within both the ARKStep and ERKStep modules admit an embedded solution \(\tilde{y}_n\), as shown in equations (3) and (7). Generally, these embedded solutions attain a slightly lower order of accuracy than the computed solution \(y_n\). Denoting the order of accuracy for \(y_n\) as \(q\) and for \(\tilde{y}_n\) as \(p\), most of these embedded methods satisfy \(p = q-1\). These values of \(q\) and \(p\) correspond to the global orders of accuracy for the method and embedding, hence each admits a local truncation error satisfying [HW1993]

(11)\[\begin{split}\| y_n - y(t_n) \| = C h_n^{q+1} + \mathcal O(h_n^{q+2}), \\ \| \tilde{y}_n - y(t_n) \| = D h_n^{p+1} + \mathcal O(h_n^{p+2}),\end{split}\]

where \(C\) and \(D\) are constants independent of \(h_n\), and where we have assumed exact initial conditions for the step, i.e. \(y_{n-1} = y(t_{n-1})\). Combining these estimates, we have

\[\| y_n - \tilde{y}_n \| = \| y_n - y(t_n) - \tilde{y}_n + y(t_n) \| \le \| y_n - y(t_n) \| + \| \tilde{y}_n - y(t_n) \| \le D h_n^{p+1} + \mathcal O(h_n^{p+2}).\]

We therefore use the norm of the difference between \(y_n\) and \(\tilde{y}_n\) as an estimate for the LTE at the step \(n\)

(12)\[T_n = \beta \left(y_n - \tilde{y}_n\right), \qquad M\, T_n = \beta h_n \sum_{i=1}^{s} \left[ \left(b^E_i - \tilde{b}^E_i\right) f_E(t^E_{n,i}, z_i) + \left(b^I_i - \tilde{b}^I_i\right) f_I(t^I_{n,i}, z_i) \right]\]

for ARK methods, and similarly for ERK methods. Here, \(\beta>0\) is an error bias to help account for the error constant \(D\); the default value of this constant is \(\beta = 1.5\), which may be modified by the user.

With this LTE estimate, the local error test is simply \(\|T_n\| < 1\), since this norm already includes the user-specified tolerances. If this error test passes, the step is considered successful, and the estimate is subsequently used to determine the next step size; the algorithms used for this purpose are described below in the section Asymptotic error control. If the error test fails, the step is rejected and a new step size \(h'\) is computed using the same error controller as for successful steps. A new attempt at the step is made, and the error test is repeated. If the error test fails twice, then the ratio \(h'/h\) is limited above to 0.3, and limited below to 0.1 after an additional step failure. After seven error test failures, control is returned to the user with a failure message. We note that all of the constants listed above are only default values; each may be modified by the user.

We define the step size ratio between a prospective step \(h'\) and a completed step \(h\) as \(\eta\), i.e. \(\eta = h' / h\). This value is subsequently bounded from above by \(\eta_\text{max}\) to ensure that step size adjustments are not overly aggressive. This upper bound changes according to the step and history,

\[\begin{split}\eta_\text{max} = \begin{cases} \text{etamx1}, & \quad\text{on the first step (default is 10000)}, \\ \text{growth}, & \quad\text{on general steps (default is 20)}, \\ 1, & \quad\text{if the previous step had an error test failure}. \end{cases}\end{split}\]

A flowchart detailing how the time steps are modified at each iteration to ensure solver convergence and successful steps is given in the figure below. Here, all norms correspond to the WRMS norm, and the error adaptivity function arkAdapt is supplied by one of the error control algorithms discussed in the subsections below.

[Figure: flowchart of the time step adaptivity procedure (time_adaptivity.png)]

For some problems it may be preferable to avoid small step size adjustments. This can be especially true for problems that construct a Newton Jacobian matrix or a preconditioner for a nonlinear or an iterative linear solve, where this construction is computationally expensive, and where convergence can be seriously hindered through use of an inaccurate matrix. To accommodate these scenarios, the step is left unchanged when \(\eta \in [\eta_L, \eta_U]\). The default values for this interval are \(\eta_L = 1\) and \(\eta_U = 1.5\), and may be modified by the user.

We note that any choices for \(\eta\) (or equivalently, \(h'\)) are subsequently constrained by the optional user-supplied bounds \(h_\text{min}\) and \(h_\text{max}\). Additionally, the time-stepping algorithms in ARKode may similarly limit \(h'\) to adhere to a user-provided “TSTOP” stopping point, \(t_\text{stop}\).
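The overall constraint chain described above can be summarized in a short C sketch: the controller’s raw ratio is capped by \(\eta_\text{max}\), frozen inside the no-change window \([\eta_L,\eta_U]\), and the resulting step is clamped to the optional user bounds. Parameter names and the treatment of an unset \(h_\text{max}\) are illustrative assumptions.

    #include <math.h>

    /* Apply the step-size constraints described above to a raw controller ratio. */
    static double constrain_step(double h, double eta_raw,
                                 double eta_max,             /* etamx1, growth, or 1 */
                                 double eta_L, double eta_U, /* defaults 1 and 1.5   */
                                 double h_min, double h_max) /* h_max <= 0: no bound */
    {
      double eta = fmin(eta_raw, eta_max);           /* limit step growth            */
      if (eta >= eta_L && eta <= eta_U) eta = 1.0;   /* avoid tiny adjustments       */
      double hnew = eta * h;
      if (h_max > 0.0 && fabs(hnew) > h_max) hnew = copysign(h_max, hnew);
      if (fabs(hnew) < h_min)                hnew = copysign(h_min, hnew);
      return hnew;
    }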

Asymptotic error control

As mentioned above, the time-stepping modules in ARKode adapt the step size in order to attain local errors within desired tolerances of the true solution. These adaptivity algorithms estimate the prospective step size \(h'\) based on the asymptotic local error estimates (11). We define the values \(\varepsilon_n\), \(\varepsilon_{n-1}\) and \(\varepsilon_{n-2}\) as

\[\begin{split}\varepsilon_k &\ \equiv \ \|T_k\| \ = \ \beta \|y_k - \tilde{y}_k\|,\end{split}\]

corresponding to the local error estimates for three consecutive steps, \(t_{n-3} \to t_{n-2} \to t_{n-1} \to t_n\). These local error history values are all initialized to 1 upon program initialization, to accommodate the few initial time steps of a calculation where some of these error estimates have not yet been computed. With these estimates, ARKode supports a variety of error control algorithms, as specified in the subsections below.

PID controller

This is the default time adaptivity controller used by the ARKStep and ERKStep modules. It derives from those found in [KC2003], [S1998], [S2003] and [S2006], and uses all three of the local error estimates \(\varepsilon_n\), \(\varepsilon_{n-1}\) and \(\varepsilon_{n-2}\) in determination of a prospective step size,

\[h' \;=\; h_n\; \varepsilon_n^{-k_1/p}\; \varepsilon_{n-1}^{k_2/p}\; \varepsilon_{n-2}^{-k_3/p},\]

where the constants \(k_1\), \(k_2\) and \(k_3\) default to 0.58, 0.21 and 0.1, respectively, and may be modified by the user. In this estimate, a floor of \(\varepsilon > 10^{-10}\) is enforced to avoid division-by-zero errors.
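A minimal C sketch of this update, assuming the three error estimates are stored as scalars (initialized to 1 during the first steps, as noted above) and that \(p\) is the embedding order:

    #include <math.h>

    /* PID step-size estimate, following the formula above.  The defaults are
       k1 = 0.58, k2 = 0.21, k3 = 0.1, and eps values below 1e-10 are floored.  */
    static double pid_step(double h, double eps_n, double eps_nm1, double eps_nm2,
                           int p, double k1, double k2, double k3)
    {
      const double eps_floor = 1.0e-10;
      eps_n   = fmax(eps_n,   eps_floor);
      eps_nm1 = fmax(eps_nm1, eps_floor);
      eps_nm2 = fmax(eps_nm2, eps_floor);
      return h * pow(eps_n,   -k1 / p)
               * pow(eps_nm1,  k2 / p)
               * pow(eps_nm2, -k3 / p);
    }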

PI controller

As with the previous method, the PI controller derives from those found in [KC2003], [S1998], [S2003] and [S2006], but it differs in that it uses only the two most recent local error estimates in its adaptivity algorithm,

\[h' \;=\; h_n\; \varepsilon_n^{-k_1/p}\; \varepsilon_{n-1}^{k_2/p}.\]

Here, \(k_1\) and \(k_2\) default to 0.8 and 0.31, respectively, though they may be changed by the user.

I controller

This is the standard time adaptivity control algorithm in use by most publicly-available ODE solver codes. It bases the prospective time step estimate entirely on the current local error estimate,

\[h' \;=\; h_n\; \varepsilon_n^{-k_1/p}.\]

By default, \(k_1=1\), but that may be modified by the user.

Explicit Gustafsson controller

This step adaptivity algorithm was proposed in [G1991], and is primarily useful with explicit Runge-Kutta methods. In the notation of our earlier controllers, it has the form

(13)\[\begin{split}h' \;=\; \begin{cases} h_1\; \varepsilon_1^{-1/p}, &\quad\text{on the first step}, \\ h_n\; \varepsilon_n^{-k_1/p}\; \left(\varepsilon_n/\varepsilon_{n-1}\right)^{k_2/p}, & \quad\text{on subsequent steps}. \end{cases}\end{split}\]

The default values of \(k_1\) and \(k_2\) are 0.367 and 0.268, respectively, and may be modified by the user.

Implicit Gustafsson controller

A version of the above controller suitable for implicit Runge-Kutta methods was introduced in [G1994], and has the form

(14)\[\begin{split}h' = \begin{cases} h_1 \varepsilon_1^{-1/p}, &\quad\text{on the first step}, \\ h_n \left(h_n / h_{n-1}\right) \varepsilon_n^{-k_1/p} \left(\varepsilon_n/\varepsilon_{n-1}\right)^{-k_2/p}, & \quad\text{on subsequent steps}. \end{cases}\end{split}\]

The algorithm parameters default to \(k_1 = 0.98\) and \(k_2 = 0.95\), but may be modified by the user.

ImEx Gustafsson controller

An ImEx version of the two preceding controllers is also available. This approach computes the estimate \(h'_1\) arising from equation (13) and the estimate \(h'_2\) arising from equation (14), and selects

\[h' = \frac{h}{|h|}\min\left\{|h'_1|, |h'_2|\right\}.\]

Here, equation (13) uses \(k_1\) and \(k_2\) with default values of 0.367 and 0.268, while equation (14) sets both parameters to the input \(k_3\) that defaults to 0.95. All of these values may be modified by the user.
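A C sketch of this selection, combining formulas (13) and (14) under the assumption that the caller tracks the previous step size and the two most recent error estimates (the first-step handling and parameter names are illustrative):

    #include <math.h>

    /* ImEx Gustafsson step estimate: h'_1 from (13), h'_2 from (14), then keep
       the smaller magnitude while preserving the direction of integration.       */
    static double imex_gustafsson(double h, double h_old,
                                  double eps_n, double eps_nm1, int p,
                                  double k1, double k2,  /* (13): 0.367, 0.268    */
                                  double k3,             /* (14): 0.95            */
                                  int first_step)
    {
      double h1, h2;
      if (first_step) {
        h1 = h * pow(eps_n, -1.0 / p);
        h2 = h1;
      } else {
        h1 = h * pow(eps_n, -k1 / p) * pow(eps_n / eps_nm1,  k2 / p);   /* (13) */
        h2 = h * (h / h_old)
               * pow(eps_n, -k3 / p) * pow(eps_n / eps_nm1, -k3 / p);   /* (14) */
      }
      double hnew = fmin(fabs(h1), fabs(h2));
      return (h < 0.0) ? -hnew : hnew;    /* h' = (h/|h|) min{|h'_1|, |h'_2|}     */
    }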

User-supplied controller

Finally, ARKode’s time-stepping modules allow the user to define their own time step adaptivity function,

\[h' = H(y, t, h_n, h_{n-1}, h_{n-2}, \varepsilon_n, \varepsilon_{n-1}, \varepsilon_{n-2}, q, p),\]

to allow for problem-specific choices, or for continued experimentation with temporal error controllers.

Explicit stability

For problems that involve a nonzero explicit component, i.e. \(f_E(t,y) \ne 0\) in ARKStep or for any problem in ERKStep, explicit and ImEx Runge-Kutta methods may benefit from additional user-supplied information regarding the explicit stability region. All ARKode adaptivity methods utilize estimates of the local error, and it is often the case that such local error control will be sufficient for method stability, since unstable steps will typically exceed the error control tolerances. However, for problems in which \(f_E(t,y)\) includes even moderately stiff components, and especially for higher-order integration methods, it may occur that a significant number of attempted steps will exceed the error tolerances. While these steps will automatically be recomputed, such trial-and-error can result in an unreasonable number of failed steps, increasing the cost of the computation. In these scenarios, a stability-based time step controller may also be useful.

Since the maximum stable explicit step for any method depends on the problem under consideration, in that the value \((h_n\lambda)\) must reside within a bounded stability region, where \(\lambda\) are the eigenvalues of the linearized operator \(\partial f_E / \partial y\), information on the maximum stable step size is not readily available to ARKode’s time-stepping modules. However, for many problems such information may be easily obtained through analysis of the problem itself, e.g. in an advection-diffusion calculation \(f_I\) may contain the stiff diffusive components and \(f_E\) may contain the comparably nonstiff advection terms. In this scenario, an explicitly stable step \(h_\text{exp}\) would be predicted as one satisfying the Courant-Friedrichs-Lewy (CFL) stability condition for the advective portion of the problem,

\[\begin{split}|h_\text{exp}| < \frac{\Delta x}{|\lambda|}\end{split}\]

where \(\Delta x\) is the spatial mesh size and \(\lambda\) is the fastest advective wave speed.

In these scenarios, a user may supply a routine to predict this maximum explicitly stable step size, \(|h_\text{exp}|\). If a value for \(|h_\text{exp}|\) is supplied, it is compared against the value resulting from the local error controller, \(|h_\text{acc}|\), and the eventual time step used will be limited accordingly,

\[h' = \frac{h}{|h|}\min\{c\, |h_\text{exp}|,\, |h_\text{acc}|\}.\]

Here the explicit stability step factor \(c>0\) (often called the “CFL number”) defaults to \(1/2\) but may be modified by the user.
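In code, this final selection is a one-line combination of the two candidate steps; the sketch below assumes \(h\) carries the sign of the direction of integration, and all names are illustrative.

    #include <math.h>

    /* Combine the stability-limited and accuracy-limited steps as above,
       with c the CFL safety factor (default 0.5).                              */
    static double limit_step(double h, double c, double h_exp, double h_acc)
    {
      double hnew = fmin(c * fabs(h_exp), fabs(h_acc));
      return (h < 0.0) ? -hnew : hnew;  /* h' = (h/|h|) min{c|h_exp|, |h_acc|}  */
    }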

Fixed time stepping

While both the ARKStep and ERKStep time-stepping modules are designed for tolerance-based time step adaptivity, they additionally support a “fixed-step” mode. This mode is typically used for debugging purposes, for verification against hand-coded Runge-Kutta methods, or for problems where the time steps should be chosen based on other problem-specific information. In this mode, all internal time step adaptivity is disabled:

  • temporal error control is disabled,
  • nonlinear or linear solver non-convergence will result in an error (instead of a step size adjustment),
  • no check against an explicit stability condition is performed.

Additional information on this mode is provided in the sections ARKStep Optional Inputs and ERKStep Optional Inputs.

Algebraic solvers

When solving a problem involving either a nonzero implicit component, \(f_I(t,y) \ne 0\), or a non-identity mass matrix, \(M \ne I\), systems of linear or nonlinear algebraic equations must be solved at each stage and/or step of the method. This section therefore focuses on the variety of mathematical methods provided in the ARKode infrastructure for such problems, including nonlinear solvers, linear solvers, preconditioners, iterative solver error control, implicit predictors, and techniques used for simplifying the above solves when using non-time-dependent mass-matrices.

Nonlinear solver methods

For both the ARK and DIRK methods corresponding to (2) and (5), an implicit system

(15)\[G(z_i) \equiv M z_i - h_n A^I_{i,i} f_I(t^I_{n,i}, z_i) - a_i = 0\]

must be solved for each stage \(z_i, i=1,\ldots,s\), where we have the data

\[a_i \equiv M y_{n-1} + h_n \sum_{j=1}^{i-1} \left[ A^E_{i,j} f_E(t^E_{n,j}, z_j) + A^I_{i,j} f_I(t^I_{n,j}, z_j) \right]\]

for the ARK methods, or

\[a_i \equiv M y_{n-1} + h_n \sum_{j=1}^{i-1} A^I_{i,j} f_I(t^I_{n,j}, z_j)\]

for the DIRK methods. Here, if \(f_I(t,y)\) depends nonlinearly on \(y\) then (15) corresponds to a nonlinear system of equations; if \(f_I(t,y)\) depends linearly on \(y\) then this is a linear system of equations.

For systems of either type, ARKode provides a choice of solution strategies. The default solver choice is a variant of Newton’s method,

(16)\[z_i^{(m+1)} = z_i^{(m)} + \delta^{(m+1)},\]

where \(m\) is the Newton iteration index, and the Newton update \(\delta^{(m+1)}\) in turn requires the solution of the Newton linear system

(17)\[{\mathcal A}\left(t^I_{n,i}, z_i^{(m)}\right)\, \delta^{(m+1)} = -G\left(z_i^{(m)}\right),\]

in which

(18)\[{\mathcal A}(t,z) \approx M - \gamma J(t,z), \quad J(t,z) = \frac{\partial f_I(t,z)}{\partial z}, \quad\text{and}\quad \gamma = h_n A^I_{i,i}.\]

When the problem involves an identity mass matrix, then as an alternative to Newton’s method, ARKode provides a fixed point iteration for solving the stages \(z_i, i=1,\ldots,s\),

(19)\[z_i^{(m+1)} = \Phi\left(z_i^{(m)}\right) \equiv z_i^{(m)} - G\left(z_i^{(m)}\right), \quad m=0,1,\ldots\]

This iteration may additionally be improved using a technique called “Anderson acceleration” [WN2011]. Unlike Newton’s method, these methods do not require the solution of a linear system at each iteration, instead solving a low-dimensional least-squares problem to construct the nonlinear update.
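The basic (unaccelerated) fixed-point iteration can be sketched in a few lines of C; here G and the WRMS norm are problem-supplied callbacks, and the simple stopping test is a stand-in for the full convergence logic described later in Nonlinear iteration error control.

    /* One stage solve with the plain fixed-point iteration (19), without Anderson
       acceleration.  All names and the simplified stopping test are illustrative.  */
    typedef void   (*GFn)(const double *z, double *gval, int N, void *user_data);
    typedef double (*NormFn)(const double *v, int N, void *user_data);

    static int fixed_point_solve(GFn G, NormFn wrmsnorm, double *z, double *gval,
                                 int N, int maxits, double tol, void *user_data)
    {
      for (int m = 0; m < maxits; m++) {
        G(z, gval, N, user_data);                      /* gval = G(z^(m))            */
        double delta = wrmsnorm(gval, N, user_data);   /* size of the correction     */
        for (int i = 0; i < N; i++) z[i] -= gval[i];   /* z^(m+1) = z^(m) - G(z^(m)) */
        if (delta < tol) return 0;                     /* converged                  */
      }
      return -1;                                       /* no convergence in maxits   */
    }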

Finally, if the user specifies that \(f_I(t,y)\) depends linearly on \(y\), and if the Newton-based nonlinear solver is chosen, then the problem (15) will be solved using only a single Newton iteration. In this case, an additional user-supplied argument indicates whether this Jacobian is time-dependent or not, signaling whether the Jacobian or preconditioner needs to be recomputed at each stage or time step, or if it can be reused throughout the full simulation.

The optimal choice of solver (Newton vs fixed-point) is highly problem dependent. Since fixed-point solvers do not require the solution of any linear systems, each iteration may be significantly less costly than their Newton counterparts. However, this can come at the cost of slower convergence (or even divergence) in comparison with Newton-like methods. On the other hand, these fixed-point solvers do allow for user specification of the Anderson-accelerated subspace size, \(m_k\). While the required amount of solver memory for acceleration grows proportionately to \(m_k N\), larger values of \(m_k\) may result in faster convergence. In our experience, this improvement is most significant for “small” values, e.g. \(1\le m_k\le 5\); larger values of \(m_k\) may not result in improved convergence.

While a Newton-based iteration is the default solver due to its increased robustness on very stiff problems, we strongly recommend that users also consider the fixed-point solver when attempting a new problem.

For either the Newton or fixed-point solvers, it is well-known that both the efficiency and robustness of the algorithm intimately depend on the choice of a good initial guess. The initial guess for these solvers is a prediction \(z_i^{(0)}\) that is computed explicitly from previously-computed data (e.g. \(y_{n-2}\), \(y_{n-1}\), and \(z_j\) where \(j<i\)). Additional information on the specific predictor algorithms is provided in the following section, Implicit predictors.

Linear solver methods

When a Newton-based method is chosen for solving each nonlinear system, a linear system of equations must be solved at each nonlinear iteration. For this solve ARKode provides several choices, including the option of a user-supplied linear solver module. The linear solver modules distributed with SUNDIALS are organized into two families: a direct family comprising direct linear solvers for dense, banded or sparse matrices, and a spils family comprising scaled, preconditioned, iterative (Krylov) linear solvers. The methods offered through these modules are as follows:

  • dense direct solvers, using either an internal SUNDIALS implementation or a BLAS/LAPACK implementation (serial version only),
  • band direct solvers, using either an internal SUNDIALS implementation or a BLAS/LAPACK implementation (serial version only),
  • sparse direct solvers, using either the KLU sparse matrix library [KLU], or the OpenMP or PThreads-enabled SuperLU_MT sparse matrix library [SuperLUMT] [Note that users will need to download and install the KLU or SuperLU_MT packages independent of ARKode],
  • SPGMR, a scaled, preconditioned GMRES (Generalized Minimal Residual) solver,
  • SPFGMR, a scaled, preconditioned FGMRES (Flexible Generalized Minimal Residual) solver,
  • SPBCGS, a scaled, preconditioned Bi-CGStab (Bi-Conjugate Gradient Stable) solver,
  • SPTFQMR, a scaled, preconditioned TFQMR (Transpose-free Quasi-Minimal Residual) solver, or
  • PCG, a preconditioned CG (Conjugate Gradient method) solver for symmetric linear systems.

For large stiff systems where direct methods are often infeasible, the combination of an implicit integrator and a preconditioned Krylov method can yield a powerful tool because it combines established methods for stiff integration, nonlinear solver iteration, and Krylov (linear) iteration with a problem-specific treatment of the dominant sources of stiffness, in the form of a user-supplied preconditioner matrix [BH1989]. We note that the direct linear solver modules currently provided by SUNDIALS are only designed to be used with the serial and threaded vector representations.

Matrix-based linear solvers

In the case that a matrix-based linear solver is used, a modified Newton iteration is utilized. In a modified Newton iteration, the matrix \({\mathcal A}\) is held fixed for multiple Newton iterations. More precisely, each Newton iteration is computed from the modified equation

(20)\[\tilde{\mathcal A}\left(\tilde{t},\tilde{z}\right)\, \delta^{(m+1)} = -G\left(z_i^{(m)}\right),\]

in which

(21)\[\tilde{\mathcal A}(t,z) \approx M - \tilde{\gamma} J(t,z), \quad\text{and}\quad \tilde{\gamma} = \tilde{h} A^I_{i,i}.\]

Here, the solution \(\tilde{z}\), time \(\tilde{t}\), and step size \(\tilde{h}\) upon which the modified equation relies are merely values of these quantities from a previous iteration. In other words, the matrix \(\tilde{\mathcal A}\) is only computed rarely, and reused for repeated solves. The frequency at which \(\tilde{\mathcal A}\) is recomputed defaults to 20 time steps, but may be modified by the user.

When using the dense and band SUNMatrix objects for the linear systems (20), the Jacobian \(J\) may be supplied by a user routine, or approximated internally by finite-differences. In the case of differencing, we use the standard approximation

\[J_{i,j}(t,z) \approx \frac{f_{I,i}(t,z+\sigma_j e_j) - f_{I,i}(t,z)}{\sigma_j},\]

where \(e_j\) is the \(j\)-th unit vector, and the increments \(\sigma_j\) are given by

\[\sigma_j = \max\left\{ \sqrt{U}\, |z_j|, \frac{\sigma_0}{w_j} \right\}.\]

Here \(U\) is the unit roundoff, \(\sigma_0\) is a small dimensionless value, and \(w_j\) is the error weight defined in (9). In the dense case, this approach requires \(N\) evaluations of \(f_I\), one for each column of \(J\). In the band case, the columns of \(J\) are computed in groups, using the Curtis-Powell-Reid algorithm, with the number of \(f_I\) evaluations equal to the matrix bandwidth.
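A dense version of this differencing strategy is sketched below in C: column \(j\) of \(J\) is formed from one extra evaluation of \(f_I\) with the \(j\)-th component of \(z\) perturbed by \(\sigma_j\). The callback type, column-major storage, and argument list are assumptions for illustration, not ARKode’s internal interface.

    #include <math.h>

    typedef void (*RhsFn)(double t, const double *z, double *f, int N, void *udata);

    /* Dense finite-difference Jacobian: J_{i,j} ~= (f_I(t, z + sigma_j e_j)_i
       - f_I(t,z)_i) / sigma_j, with sigma_j = max(sqrt(U)|z_j|, sigma0/w_j).   */
    static void fd_dense_jacobian(RhsFn fI, double t, double *z, const double *w,
                                  int N, double uround, double sigma0,
                                  double *f0, double *ftmp, double *J, void *udata)
    {
      fI(t, z, f0, N, udata);                          /* unperturbed f_I(t,z)   */
      for (int j = 0; j < N; j++) {
        double sigma = fmax(sqrt(uround) * fabs(z[j]), sigma0 / w[j]);
        double zsave = z[j];
        z[j] += sigma;                                 /* perturb component j    */
        fI(t, z, ftmp, N, udata);
        z[j] = zsave;                                  /* restore                */
        for (int i = 0; i < N; i++)
          J[j*N + i] = (ftmp[i] - f0[i]) / sigma;      /* column-major storage   */
      }
    }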

We note that with sparse and user-supplied SUNMatrix objects, the Jacobian must be supplied by a user routine.

Matrix-free iterative linear solvers

In the case that a matrix-free iterative linear solver is chosen, an inexact Newton iteration is utilized. Here, the matrix \({\mathcal A}\) is not itself constructed since the algorithms only require the product of this matrix with a given vector. Additionally, each Newton system (17) is not solved completely, since these linear solvers are iterative (hence the “inexact” in the name). As a result, for these linear solvers \({\mathcal A}\) is applied in a matrix-free manner,

\[{\mathcal A}(t,z)\, v = Mv - \gamma\, J(t,z)\, v.\]

The matrix-vector products \(Mv\) must be provided through a user-supplied routine; the matrix-vector products \(Jv\) are obtained by either calling an optional user-supplied routine, or through a finite difference approximation to the directional derivative:

\[J(t,z)\,v \approx \frac{f_I(t,z+\sigma v) - f_I(t,z)}{\sigma},\]

where the increment is chosen as \(\sigma = 1/\|v\|\) to ensure that \(\|\sigma v\| = 1\).
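The corresponding matrix-free difference quotient is equally short; the sketch below reuses the illustrative RhsFn callback type from the previous sketch and assumes the WRMS norm of \(v\) has already been computed.

    /* Matrix-free J*v approximation: Jv ~= (f_I(t, z + sigma v) - f_I(t, z))/sigma
       with sigma = 1/||v||, so that ||sigma v|| = 1.                              */
    static void fd_jtimes(RhsFn fI, double t, const double *z, const double *v,
                          int N, double vnorm,      /* WRMS norm of v              */
                          double *ztmp, double *f0, double *f1, double *Jv,
                          void *udata)
    {
      double sigma = 1.0 / vnorm;
      for (int i = 0; i < N; i++) ztmp[i] = z[i] + sigma * v[i];
      fI(t, z,    f0, N, udata);
      fI(t, ztmp, f1, N, udata);
      for (int i = 0; i < N; i++) Jv[i] = (f1[i] - f0[i]) / sigma;
    }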

As with the modified Newton method that reused \({\mathcal A}\) between solves, the inexact Newton iteration may also recompute the preconditioner \(P\) infrequently to balance the high costs of matrix construction and factorization against the reduced convergence rate that may result from a stale preconditioner.

Updating the linear solver

In cases where recomputation of the Newton matrix \(\tilde{\mathcal A}\) or preconditioner \(P\) is lagged, these structures will be recomputed only in the following circumstances:

  • when starting the problem,
  • when more than 20 steps have been taken since the last update (this value may be modified by the user),
  • when the value \(\tilde{\gamma}\) of \(\gamma\) at the last update satisfies \(\left|\gamma/\tilde{\gamma} - 1\right| > 0.2\) (this value may be modified by the user),
  • when a non-fatal convergence failure just occurred,
  • when an error test failure just occurred, or
  • if the problem is linearly implicit and \(\gamma\) has changed by a factor larger than 100 times machine epsilon.

When an update is forced due to a convergence failure, an update of \(\tilde{\mathcal A}\) or \(P\) may or may not involve a re-evaluation of \(J\) (in \(\tilde{\mathcal A}\)) or of Jacobian data (in \(P\)), depending on whether errors in the Jacobian were the likely cause of the failure. More generally, the decision is made to re-evaluate \(J\) (or instruct the user to update \(P\)) when:

  • starting the problem,
  • more than 50 steps have been taken since the last evaluation,
  • a convergence failure occurred with an outdated matrix, and the value \(\tilde{\gamma}\) of \(\gamma\) at the last update satisfies \(\left|\gamma/\tilde{\gamma} - 1\right| > 0.2\),
  • a convergence failure occurred that forced a step size reduction, or
  • if the problem is linearly implicit and \(\gamma\) has changed by a factor larger than 100 times machine epsilon.

However, for linear solvers and preconditioners that do not rely on costly matrix construction and factorization operations (e.g. when using a geometric multigrid method as preconditioner), it may be more efficient to update these structures more frequently than the above heuristics specify, since the increased rate of linear/nonlinear solver convergence may more than account for the additional cost of Jacobian/preconditioner construction. To this end, a user may specify that the system matrix \({\mathcal A}\) and/or preconditioner \(P\) should be recomputed more frequently.

As will be further discussed in the section Preconditioning, in the case of most Krylov methods, preconditioning may be applied on the left, right, or on both sides of \({\mathcal A}\), with user-supplied routines for the preconditioner setup and solve operations.

Iteration Error Control

Nonlinear iteration error control

The stopping test for all of the nonlinear solver algorithms is related to the temporal local error test, with the goal of keeping the nonlinear iteration errors from interfering with local error control. Denoting the final computed value of each stage solution as \(z_i^{(m)}\), and the true stage solution solving (15) as \(z_i\), we want to ensure that the iteration error \(z_i - z_i^{(m)}\) is “small” (recall that a norm less than 1 is already considered within an acceptable tolerance).

To this end, we first estimate the linear convergence rate \(R_i\) of the nonlinear iteration. We initialize \(R_i=1\), and reset it to this value whenever \(\tilde{\mathcal A}\) or \(P\) are updated. After computing a nonlinear correction \(\delta^{(m)} = z_i^{(m)} - z_i^{(m-1)}\), if \(m>0\) we update \(R_i\) as

\[R_i \leftarrow \max\{ 0.3 R_i, \left\|\delta^{(m)}\right\| / \left\|\delta^{(m-1)}\right\| \},\]

where the factor 0.3 may be modified by the user.

Let \(y_n^{(m)}\) denote the time-evolved solution constructed using our approximate nonlinear stage solutions, \(z_i^{(m)}\), and let \(y_n^{(\infty)}\) denote the time-evolved solution constructed using exact nonlinear stage solutions. We then use the estimate

\[\left\| y_n^{(\infty)} - y_n^{(m)} \right\| \approx \max_i \left\| z_i^{(m+1)} - z_i^{(m)} \right\| \approx \max_i R_i \left\| z_i^{(m)} - z_i^{(m-1)} \right\| = \max_i R_i \left\| \delta^{(m)} \right\|.\]

Therefore our convergence (stopping) test for the nonlinear iteration for each stage is

(22)\[\begin{split}R_i \left\|\delta^{(m)} \right\| < \epsilon,\end{split}\]

where the factor \(\epsilon\) has default value 0.1. We default to a maximum of 3 nonlinear iterations. We also declare the nonlinear iteration to be divergent if any of the ratios \(\|\delta^{(m)}\| / \|\delta^{(m-1)}\| > 2.3\) with \(m>0\). If convergence fails in the fixed point iteration, or in the Newton iteration with \(J\) or \({\mathcal A}\) current, we reduce the step size \(h_n\) by a factor of 0.25. The integration will be halted after 10 convergence failures, or if a convergence failure occurs with \(h_n = h_\text{min}\). However, since the nonlinearity of (15) may vary significantly based on the problem under consideration, these default constants may all be modified by the user.
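The rate update, divergence check, and stopping test above can be collected into a small helper; the return codes and parameter names here (crdown for the 0.3 factor, rdiv for the 2.3 divergence threshold) are illustrative assumptions rather than ARKode internals.

    #include <math.h>

    /* Nonlinear convergence test: update the rate estimate R, detect divergence,
       and apply the stopping test (22).  delta_m and delta_m1 are WRMS norms of
       the two most recent corrections; eps defaults to 0.1.                      */
    static int nonlin_test(double *R, double delta_m, double delta_m1,
                           int m, double eps, double crdown, double rdiv)
    {
      if (m > 0) {
        *R = fmax(crdown * (*R), delta_m / delta_m1);  /* rate update             */
        if (delta_m / delta_m1 > rdiv) return 2;       /* divergence detected     */
      }
      if ((*R) * delta_m < eps) return 0;              /* converged, test (22)    */
      return 1;                                        /* keep iterating          */
    }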

Linear iteration error control

When a Krylov method is used to solve the linear Newton systems (17), its errors must also be controlled. To this end, we approximate the linear iteration error in the solution vector \(\delta^{(m)}\) using the preconditioned residual vector, e.g. \(r = P^{-1}\left({\mathcal A}\delta^{(m)} + G\right)\) for the case of left preconditioning (the role of the preconditioner is further elaborated in the next section). In an attempt to ensure that the linear iteration errors do not interfere with the nonlinear solution error and local time integration error controls, we require that the norm of the preconditioned linear residual satisfies

(23)\[\|r\| \le \frac{\epsilon_L \epsilon}{10}.\]

Here \(\epsilon\) is the same value as that is used above for the nonlinear error control. The factor of 10 is used to ensure that the linear solver error does not adversely affect the nonlinear solver convergence. Smaller values for the parameter \(\epsilon_L\) are typically useful for strongly nonlinear or very stiff ODE systems, while easier ODE systems may benefit from a value closer to 1. The default value is \(\epsilon_L = 0.05\), which may be modified by the user. We note that for linearly implicit problems the tolerance (23) is similarly used for the single Newton iteration.

Preconditioning

When using an inexact Newton method to solve the nonlinear system (15), an iterative method is used repeatedly to solve linear systems of the form \({\mathcal A}x = b\), where \(x\) is a correction vector and \(b\) is a residual vector. If this iterative method is one of the scaled preconditioned iterative linear solvers supplied with SUNDIALS, their efficiency may benefit tremendously from preconditioning. A system \({\mathcal A}x=b\) can be preconditioned using any one of:

\[\begin{split}(P^{-1}{\mathcal A})x = P^{-1}b & \qquad\text{[left preconditioning]}, \\ ({\mathcal A}P^{-1})Px = b & \qquad\text{[right preconditioning]}, \\ (P_L^{-1} {\mathcal A} P_R^{-1}) P_R x = P_L^{-1}b & \qquad\text{[left and right preconditioning]}.\end{split}\]

These Krylov iterative methods are then applied to a system with the matrix \(P^{-1}{\mathcal A}\), \({\mathcal A}P^{-1}\), or \(P_L^{-1} {\mathcal A} P_R^{-1}\), instead of \({\mathcal A}\). In order to improve the convergence of the Krylov iteration, the preconditioner matrix \(P\), or the product \(P_L P_R\) in the third case, should in some sense approximate the system matrix \({\mathcal A}\). Simultaneously, in order to be cost-effective the matrix \(P\) (or matrices \(P_L\) and \(P_R\)) should be reasonably efficient to evaluate and solve. Finding an optimal point in this trade-off between rapid convergence and low cost can be quite challenging. Good choices are often problem-dependent (for example, see [BH1989] for an extensive study of preconditioners for reaction-transport systems).

Most of the iterative linear solvers supplied with SUNDIALS allow for all three types of preconditioning (left, right or both), although for non-symmetric matrices \({\mathcal A}\) we know of few situations where preconditioning on both sides is superior to preconditioning on one side only (with the product \(P = P_L P_R\)). Moreover, for a given preconditioner matrix, the merits of left vs. right preconditioning are unclear in general, so we recommend that the user experiment with both choices. Performance can differ between these since the inverse of the left preconditioner is included in the linear system residual whose norm is being tested in the Krylov algorithm. As a rule, however, if the preconditioner is the product of two matrices, we recommend that preconditioning be done either on the left only or the right only, rather than using one factor on each side. An exception to this rule is the PCG solver, that itself assumes a symmetric matrix \({\mathcal A}\), since the PCG algorithm in fact applies the single preconditioner matrix \(P\) in both left/right fashion as \(P^{-1/2} {\mathcal A} P^{-1/2}\).

Typical preconditioners are based on approximations to the system Jacobian, \(J = \partial f_I / \partial y\). Since the Newton iteration matrix involved is \({\mathcal A} = M - \gamma J\), any approximation \(\bar{J}\) to \(J\) yields a matrix that is of potential use as a preconditioner, namely \(P = M - \gamma \bar{J}\). Because the Krylov iteration occurs within a Newton iteration and further also within a time integration, and since each of these iterations has its own test for convergence, the preconditioner may use a very crude approximation, as long as it captures the dominant numerical features of the system. We have found that the combination of a preconditioner with the Newton-Krylov iteration, using even a relatively poor approximation to the Jacobian, can be surprisingly superior to using the same matrix without Krylov acceleration (i.e., a modified Newton iteration), as well as to using the Newton-Krylov method with no preconditioning.

Implicit predictors

For problems with implicit components, a prediction algorithm is employed for constructing the initial guesses for each implicit Runge-Kutta stage, \(z_i^{(0)}\). As is well-known with nonlinear solvers, the selection of a good initial guess can have dramatic effects on both the speed and robustness of the solve, making the difference between rapid quadratic convergence versus divergence of the iteration. To this end, a variety of prediction algorithms are provided. In each case, the stage guesses \(z_i^{(0)}\) are constructed explicitly using readily-available information, including the previous step solutions \(y_{n-1}\) and \(y_{n-2}\), as well as any previous stage solutions \(z_j, \quad j<i\). In most cases, prediction is performed by constructing an interpolating polynomial through existing data, which is then evaluated at the desired stage time to provide an inexpensive but (hopefully) reasonable prediction of the stage solution. Specifically, for most Runge-Kutta methods each stage solution satisfies

\[z_i \approx y(t^I_{n,i}),\]

so by constructing an interpolating polynomial \(p_q(t)\) through a set of existing data, the initial guess at stage solutions may be approximated as

(24)\[z_i^{(0)} = p_q(t^I_{n,i}).\]

As the stage times for implicit ARK and DIRK stages usually satisfy \(c_j^I > 0\), it is typically the case that \(t^I_{n,j}\) is outside of the time interval containing the data used to construct \(p_q(t)\), hence (24) will correspond to an extrapolant instead of an interpolant. The dangers of using a polynomial interpolant to extrapolate values outside the interpolation interval are well-known, with higher-order polynomials and predictions further outside the interval resulting in the greatest potential inaccuracies.

The prediction algorithms available in ARKode therefore construct a variety of interpolants \(p_q(t)\), having different polynomial order and using different interpolation data, to support ‘optimal’ choices for different types of problems, as described below.

Trivial predictor

The so-called “trivial predictor” is given by the formula

\[p_0(t) = y_{n-1}.\]

While this piecewise-constant interpolant is clearly not a highly accurate candidate for problems with time-varying solutions, it is often the most robust approach for highly stiff problems, or for problems with implicit constraints whose violation may cause illegal solution values (e.g. a negative density or temperature).

Maximum order predictor

At the opposite end of the spectrum, ARKode’s interpolation module can be used to construct a higher-order polynomial interpolant, \(p_q(t)\), based on the two most-recently-computed solutions, \(\left\{ y_{n-2}, f_{n-2}, y_{n-1}, f_{n-1} \right\}\). This can then be used to extrapolate predicted stage solutions for each stage time \(t^I_{n,i}\). This polynomial order is the same as that specified by the user for dense output.

Variable order predictor

This predictor attempts to use higher-order polynomials \(p_q(t)\) for predicting earlier stages, and lower-order interpolants for later stages. It uses the same interpolation module as described above, but chooses \(q\) adaptively based on the stage index \(i\), under the (rather tenuous) assumption that the stage times are increasing, i.e. \(c^I_j < c^I_k\) for \(j<k\):

\[q = \max\{ q_\text{max} - i,\; 1 \}.\]

Cutoff order predictor

This predictor follows a similar idea as the previous algorithm, but monitors the actual stage times to determine the polynomial interpolant to use for prediction. Denoting \(\tau = c_i^I \frac{h_n}{h_{n-1}}\), the polynomial degree \(q\) is chosen as:

\[\begin{split}q = \begin{cases} q_\text{max}, & \text{if}\quad \tau < \tfrac12,\\ 1, & \text{otherwise}. \end{cases}\end{split}\]

Bootstrap predictor

This predictor does not use any information from the preceding step, instead using information only within the current step \([t_{n-1},t_n]\). In addition to using the solution and ODE right-hand side function, \(y_{n-1}\) and \(f(t_{n-1},y_{n-1})\), this approach uses the right-hand side from a previously computed stage solution in the same step, \(f(t_{n-1}+c^I_j h,z_j)\) to construct a quadratic Hermite interpolant for the prediction. If we define the constants \(\tilde{h} = c^I_j h\) and \(\tau = c^I_i h\), the predictor is given by

\[z_i^{(0)} = y_{n-1} + \left(\tau - \frac{\tau^2}{2\tilde{h}}\right) f(t_{n-1},y_{n-1}) + \frac{\tau^2}{2\tilde{h}} f(t_{n-1}+\tilde{h},z_j).\]

For stages without a nonzero preceding stage time, i.e. \(c^I_j = 0\) for all \(j<i\), this method reduces to using the trivial predictor \(z_i^{(0)} = y_{n-1}\). For stages having multiple preceding nonzero \(c^I_j\), we choose the stage having the largest \(c^I_j\) value, to minimize the level of extrapolation used in the prediction.

We note that in general, each stage solution \(z_j\) has significantly worse accuracy than the time step solutions \(y_{n-1}\), due to the difference between the stage order and the method order in Runge-Kutta methods. As a result, the accuracy of this predictor will generally be rather limited, but it is provided for problems in which this increased stage error is better than the effects of extrapolation far outside of the previous time step interval \([t_{n-2},t_{n-1}]\).

We further note that although this method could be used with non-identity mass matrix \(M\ne I\), support for that mode is not currently implemented, so selection of this predictor in the case that \(M\ne I\) will result in use of the trivial predictor.

Minimum correction predictor

The last predictor is not interpolation based; instead it utilizes all existing stage information from the current step to create a predictor containing all but the current stage solution. Specifically, as discussed in equations (3) and (15), each stage solves a nonlinear equation

\[\begin{split}z_i &= y_{n-1} + h_n \sum_{j=1}^{i-1} A^E_{i,j} f_E(t^E_{n,j}, z_j) + h_n \sum_{j=1}^{i} A^I_{i,j} f_I(t^I_{n,j}, z_j), \\ \Leftrightarrow \qquad \qquad & \\ G(z_i) &\equiv z_i - h_n A^I_{i,i} f_I(t^I_{n,i}, z_i) - a_i = 0.\end{split}\]

This prediction method merely computes the predictor \(z_i\) as

\[\begin{split}z_i &= y_{n-1} + h_n \sum_{j=1}^{i-1} A^E_{i,j} f_E(t^E_{n,j}, z_j) + h_n \sum_{j=1}^{i-1} A^I_{i,j} f_I(t^I_{n,j}, z_j), \\ \Leftrightarrow \quad \qquad & \\ z_i &= a_i.\end{split}\]

We again note that although this method could be used with non-identity mass matrix \(M\ne I\), support for that mode is not currently implemented, so selection of this predictor in the case that \(M\ne I\) will result in use of the trivial predictor.

Mass matrix solver

Within the algorithms described above, there are multiple locations where a matrix-vector product

(25)\[b = M v\]

or a linear solve

(26)\[x = M^{-1} b\]

are required.

Of course, for problems in which \(M=I\) both of these operators are trivial. However for problems with non-identity \(M\), these linear solves (26) may be handled using any valid linear solver module, in the same manner as described in the section Linear solver methods for solving the linear Newton systems.

At present, for DIRK and ARK problems using a matrix-based solver for the Newton nonlinear iterations, the type of matrix (dense, band, sparse, or custom) for the Jacobian matrix \(J\) must match the type of mass matrix \(M\), since these are combined to form the Newton system matrix \(\tilde{\mathcal A}\). When matrix-based methods are employed, the user must supply a routine of type ARKLsMassFn to compute \(M\) in the appropriate form to match the structure of \({\mathcal A}\). This matrix structure is used internally to perform any requisite mass matrix-vector products (25).

When matrix-free methods are selected, a routine must be supplied to perform the mass-matrix-vector product, \(Mv\). As with iterative solvers for the Newton systems, preconditioning may be applied to aid in solution of the mass matrix systems (26). When using an iterative mass matrix linear solver, we require that the norm of the preconditioned linear residual satisfies

(27)\[\|r\| \le \epsilon_L \epsilon,\]

where again, \(\epsilon\) is the nonlinear solver tolerance parameter from (22). When using iterative system and mass matrix linear solvers, \(\epsilon_L\) may be specified separately for both tolerances (23) and (27).

In the above algorithmic description there are three locations where a linear solve of the form (26) is required: (a) in constructing the time-evolved solution \(y_n\), (b) in estimating the local temporal truncation error, and (c) in constructing predictors for the implicit solver iteration (see section Maximum order predictor). Specifically, to construct the time-evolved solution \(y_n\) from equation (3) we must solve

\[\begin{split}&M y_n \ = \ M y_{n-1} + h_n \sum_{i=1}^{s} \left( b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right), \\ \Leftrightarrow \qquad & \\ &M (y_n -y_{n-1}) \ = \ h_n \sum_{i=1}^{s} \left(b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right), \\ \Leftrightarrow \qquad & \\ &M \nu \ = \ h_n \sum_{i=1}^{s} \left(b^E_i f_E(t^E_{n,i}, z_i) + b^I_i f_I(t^I_{n,i}, z_i)\right),\end{split}\]

for the update \(\nu = y_n - y_{n-1}\). For construction of the stages \(z_i\) this requires no mass matrix solves (as these are included in the nonlinear system solve). Similarly, in computing the local temporal error estimate \(T_n\) from equation (12) we must solve systems of the form

(28)\[M\, T_n = \beta h_n \sum_{i=1}^{s} \left[ \left(b^E_i - \tilde{b}^E_i\right) f_E(t^E_{n,i}, z_i) + \left(b^I_i - \tilde{b}^I_i\right) f_I(t^I_{n,i}, z_i) \right].\]

Lastly, in constructing dense output and implicit predictors of order 2 or higher (as in the section Maximum order predictor above), we must compute the derivative information \(f_k\) from the equation

\[M f_k = f_E(t_k, y_k) + f_I(t_k, y_k).\]

In total, these require only two mass-matrix linear solves (26) per attempted time step, with one more upon completion of a time step that meets the solution accuracy requirements. When fixed time-stepping is used (\(h_n=h\)), the solve (28) is not performed at each attempted step.

Rootfinding

Many of the time-stepping modules in ARKode also support a rootfinding feature. This means that, while integrating the IVP (1), these can also find the roots of a set of user-defined functions \(g_i(t,y)\) that depend on \(t\) and the solution vector \(y = y(t)\). The number of these root functions is arbitrary, and if more than one \(g_i\) is found to have a root in any given interval, the various root locations are found and reported in the order that they occur on the \(t\) axis, in the direction of integration.

Generally, this rootfinding feature finds only roots of odd multiplicity, corresponding to changes in sign of \(g_i(t, y(t))\), denoted \(g_i(t)\) for short. If a user root function has a root of even multiplicity (no sign change), it will almost certainly be missed due to the realities of floating-point arithmetic. If such a root is desired, the user should reformulate the root function so that it changes sign at the desired root.

The basic scheme used is to check for sign changes of any \(g_i(t)\) over each time step taken, and then (when a sign change is found) to home in on the root (or roots) with a modified secant method [HS1980]. In addition, each time \(g\) is evaluated, ARKode checks to see if \(g_i(t) = 0\) exactly, and if so it reports this as a root. However, if an exact zero of any \(g_i\) is found at a point \(t\), ARKode computes \(g(t+\delta)\) for a small increment \(\delta\), slightly further in the direction of integration, and if any \(g_i(t+\delta) = 0\) also, ARKode stops and reports an error. This way, each time ARKode takes a time step, it is guaranteed that the values of all \(g_i\) are nonzero at some past value of \(t\), beyond which a search for roots is to be done.

At any given time in the course of the time-stepping, after suitable checking and adjusting has been done, ARKode has an interval \((t_\text{lo}, t_\text{hi}]\) in which roots of the \(g_i(t)\) are to be sought, such that \(t_\text{hi}\) is further ahead in the direction of integration, and all \(g_i(t_\text{lo}) \ne 0\). The endpoint \(t_\text{hi}\) is either \(t_n\), the end of the time step last taken, or the next requested output time \(t_\text{out}\) if this comes sooner. The endpoint \(t_\text{lo}\) is either \(t_{n-1}\), or the last output time \(t_\text{out}\) (if this occurred within the last step), or the last root location (if a root was just located within this step), possibly adjusted slightly toward \(t_n\) if an exact zero was found. The algorithm checks \(g(t_\text{hi})\) for zeros, and it checks for sign changes in \((t_\text{lo}, t_\text{hi})\). If no sign changes are found, then either a root is reported (if some \(g_i(t_\text{hi}) = 0\)) or we proceed to the next time interval (starting at \(t_\text{hi}\)). If one or more sign changes were found, then a loop is entered to locate the root to within a rather tight tolerance, given by

\[\tau = 100\, U\, (|t_n| + |h|)\qquad (\text{where}\; U = \text{unit roundoff}).\]

Whenever sign changes are seen in two or more root functions, the one deemed most likely to have its root occur first is the one with the largest value of \(\left|g_i(t_\text{hi})\right| / \left| g_i(t_\text{hi}) - g_i(t_\text{lo})\right|\), corresponding to the closest to \(t_\text{lo}\) of the secant method values. At each pass through the loop, a new value \(t_\text{mid}\) is set, strictly within the search interval, and the values of \(g_i(t_\text{mid})\) are checked. Then either \(t_\text{lo}\) or \(t_\text{hi}\) is reset to \(t_\text{mid}\) according to which subinterval is found to have the sign change. If there is none in \((t_\text{lo}, t_\text{mid})\) but some \(g_i(t_\text{mid}) = 0\), then that root is reported. The loop continues until \(\left|t_\text{hi} - t_\text{lo} \right| < \tau\), and then the reported root location is \(t_\text{hi}\). In the loop to locate the root of \(g_i(t)\), the formula for \(t_\text{mid}\) is

\[t_\text{mid} = t_\text{hi} - \frac{g_i(t_\text{hi}) (t_\text{hi} - t_\text{lo})}{g_i(t_\text{hi}) - \alpha g_i(t_\text{lo})} ,\]

where \(\alpha\) is a weight parameter. On the first two passes through the loop, \(\alpha\) is set to 1, making \(t_\text{mid}\) the secant method value. Thereafter, \(\alpha\) is reset according to the side of the subinterval (low vs high, i.e. toward \(t_\text{lo}\) vs toward \(t_\text{hi}\)) in which the sign change was found in the previous two passes. If the two sides were opposite, \(\alpha\) is set to 1. If the two sides were the same, \(\alpha\) is halved (if on the low side) or doubled (if on the high side). The value of \(t_\text{mid}\) is closer to \(t_\text{lo}\) when \(\alpha < 1\) and closer to \(t_\text{hi}\) when \(\alpha > 1\). If the above value of \(t_\text{mid}\) is within \(\tau /2\) of \(t_\text{lo}\) or \(t_\text{hi}\), it is adjusted inward, such that its fractional distance from the endpoint (relative to the interval size) is between 0.1 and 0.5 (with 0.5 being the midpoint), and the actual distance from the endpoint is at least \(\tau/2\).
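A simplified C sketch of this weighted secant step is given below; the endpoint guard shown here only enforces the \(\tau/2\) minimum distance and omits the 0.1–0.5 fractional-distance adjustment, so it is an approximation of the full logic described above.

    #include <math.h>

    /* Weighted secant step for the rootfinding loop: alpha < 1 pulls t_mid toward
       t_lo, alpha > 1 pushes it toward t_hi.  The guard keeps t_mid at least
       tau/2 away from either endpoint (simplified from the text).                 */
    static double root_midpoint(double tlo, double thi, double glo, double ghi,
                                double alpha, double tau)
    {
      double tmid = thi - ghi * (thi - tlo) / (ghi - alpha * glo);
      if (fabs(tmid - tlo) < 0.5 * tau) tmid = tlo + copysign(0.5 * tau, thi - tlo);
      if (fabs(thi - tmid) < 0.5 * tau) tmid = thi - copysign(0.5 * tau, thi - tlo);
      return tmid;
    }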

Finally, we note that when running in parallel, ARKode’s rootfinding module assumes that the entire set of root defining functions \(g_i(t,y)\) is replicated on every MPI task. Since in these cases the vector \(y\) is distributed across tasks, it is the user’s responsibility to perform any necessary inter-task communication to ensure that \(g_i(t,y)\) is identical on each task.