ODE Solvers

Solves the ODE defined by prob using the algorithm alg. If no algorithm is given, a default algorithm will be chosen.
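
For example, assuming prob is a previously defined ODEProblem (a sketch):

sol = solve(prob)          # let a default algorithm be chosen
sol = solve(prob, Tsit5()) # explicitly choose the Tsit5 algorithm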

Recommended Methods

It is suggested that you try choosing an algorithm using the alg_hints keyword argument. However, in some cases you may want something specific, or you may just be curious. This guide is to help you choose the right algorithm.
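
For example, a hint-based solve might look like the following sketch (:stiff is one of the accepted hints):

sol = solve(prob, alg_hints = [:stiff])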

Unknown Stiffness Problems

When the stiffness of the problem is unknown, it is recommended you use a stiffness detection and auto-switching algorithm. These methods are multi-paradigm and allow for efficient solution of both stiff and non-stiff problems. The cost of auto-switching is minimal, but the available choices are restricted, so these are a good go-to method when applicable.

For default tolerances, AutoTsit5(Rosenbrock23()) is a good choice. For lower tolerances, using AutoVern7 or AutoVern9 with Rodas4, KenCarp4, or Rodas5 can all be good choices depending on the problem. For very large systems (>1000 ODEs?), consider using lsoda.
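
As a sketch of these recommendations (prob is an assumed ODEProblem):

sol = solve(prob, AutoTsit5(Rosenbrock23()))           # default tolerances
sol = solve(prob, AutoVern7(Rodas5()), reltol = 1e-10) # lower tolerances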

Non-Stiff Problems

For non-stiff problems, the native OrdinaryDiffEq.jl algorithms are vastly more efficient than the other choices. For most non-stiff problems, we recommend Tsit5. When more robust error control is required, BS5 is a good choice. If at moderate tolerances the interpolation error is very important, consider the OwrenZen5 method. For fast solving at higher tolerances, we recommend BS3 (or OwrenZen3 if the interpolation error is important). For high accuracy within the range of Float64 (tolerances of roughly 1e-8 to 1e-12), we recommend Vern6, Vern7, or Vern8 as efficient choices.

For high accuracy non-stiff solving (BigFloat and tolerances like <1e-12), we recommend the Vern9 method. If a high-order method is needed with a high order interpolant, then you should choose Vern9, which is order 9 with an order 9 interpolant. If you need extremely high accuracy (<1e-30?) and do not need an interpolant, try the Feagin12 or Feagin14 methods. Note that the Feagin methods are the only high-order optimized methods which do not include a high-order interpolant (they do include a 3rd order Hermite interpolation if needed). Note that these high order RK methods are more robust to discontinuities than the high order Adams-Bashforth methods, achieve very high precision, and are much more efficient than the extrapolation methods. However, the VCABM method can be a good choice for high accuracy when the system of equations is very large (>1,000 ODEs?), the function calculation is very expensive, or the solution is very smooth.
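
For instance, a high-accuracy solve might look like the following sketch, where promoting the initial condition and time span to BigFloat is an assumed example setup (f, u0, and tspan are illustrative names):

prob_big = ODEProblem(f, big.(u0), big.(tspan))
sol = solve(prob_big, Vern9(), abstol = 1e-20, reltol = 1e-20)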

If strict error bounds are needed, then adaptive methods with defect controls are required. Defect controls use an error measurement on the interpolating polynomial to make the error estimate better capture the error over the full interval. For medium accuracy calculations, RK4 is a good choice.

Stiff Problems

For stiff problems at high tolerances (>1e-2?) it is recommended that you use Rosenbrock23 or TRBDF2. These are robust to oscillations and massive stiffness, though they are only efficient when low accuracy is needed. Rosenbrock23 is more efficient for small systems where re-evaluating and re-factorizing the Jacobian is not too costly, while for sufficiently large systems TRBDF2 will be more efficient. ABDF2 can be the most efficient for the largest systems or the most expensive f.

At medium tolerances (>1e-8?) it is recommended you use Rodas5 or Rodas4P (the former is more efficient but the latter is more reliable), Kvaerno5, or KenCarp4. As native DifferentialEquations.jl solvers, many Julia numeric types (such as BigFloat, ArbFloat, or DecFP) will work. When the equation is defined via the @ode_def macro, these will be the most efficient.
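
For example (a sketch; prob is an assumed stiff ODEProblem):

sol = solve(prob, KenCarp4())
sol = solve(prob, Rodas5(), reltol = 1e-8, abstol = 1e-8)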

For faster solving at low tolerances (<1e-9) but when Vector{Float64} is used, use radau.

For asymptotically large systems of ODEs (N>1000?) where f is very costly and the complex eigenvalues are minimal (low oscillations), CVODE_BDF will be the most efficient, but it requires Vector{Float64}. CVODE_BDF will also do surprisingly well if the solution is smooth. However, this method can be less stable than the other methods and may fail in low accuracy situations. Another good choice for this regime is lsoda.

Special Properties of Stiff Integrators

ImplicitMidpoint is a symmetric and symplectic integrator. Trapezoid is a symmetric (almost symplectic) integrator with adaptive timestepping. ImplicitEuler is an extension of the common algorithm with adaptive timestepping and efficient quasi-Newton Jacobian reuse, and it is fully strong-stability preserving (SSP) for hyperbolic PDEs.

Note that Rodas4 loses accuracy on discretizations of nonlinear parabolic PDEs; in those situations it's suggested you replace it with Rodas4P, which retains 3rd order accuracy there. ROS3P is only third order but achieves that order on such problems, and can thus be more efficient in this case.

Translations from MATLAB/Python/R

For users familiar with MATLAB/Python/R, good translations of the standard library methods are as follows:

ode23 -> BS3()
ode45/dopri5 -> DP5()
ode23s -> Rosenbrock23()
ode113 -> VCABM()
dop853 -> DP8()
ode15s/vode -> QNDF(), though in many cases Rodas4() can be more efficient
ode23t -> Trapezoid()
ode23tb -> TRBDF2()
lsoda -> lsoda() (requires the LSODA.jl wrapper)

Full List of Methods


Unless otherwise specified, the OrdinaryDiffEq algorithms all come with a 3rd order Hermite polynomial interpolation. Algorithms denoted as having a "free" interpolation require no extra steps for the interpolation. For the non-free higher order interpolating functions, the extra steps are computed lazily (i.e. not during the solve).

The OrdinaryDiffEq.jl algorithms achieve the highest performance for non-stiff equations while being the most generic: they accept the most Julia-based types, allow for sophisticated event handling, etc. On stiff ODEs these algorithms are again consistently among the top. OrdinaryDiffEq.jl is recommended for most ODE problems.

Runge-Kutta Methods for Non-Stiff Equations

Example usage:

alg = Tsit5()

Additionally, the following algorithms have a lazy interpolant:

These methods require a few extra steps in order to compute the high order interpolation, but these steps are only taken when the interpolation is used. When lazy, these methods assume that the parameter vector p is unchanged between when the interval is solved and when the interpolation is computed. If p is changed in a ContinuousCallback, or in a DiscreteCallback where the continuous solution is used after the full solution, then set lazy=false.


solve(prob,Vern7()) # lazy by default
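solve(prob,Vern7(lazy=false)) # disable laziness, e.g. if p changes in a callback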

Parallel Explicit Runge-Kutta Methods

These methods utilize multithreading on the f calls to parallelize the problem. This requires that simultaneous calls to f are thread-safe.

Explicit Strong-Stability Preserving Runge-Kutta Methods for Hyperbolic PDEs (Conservation Laws)

The SSP coefficients of the methods can be queried as ssp_coefficient(alg). All explicit SSP methods take two optional arguments, SSPXY(stage_limiter!, step_limiter!), where stage_limiter! and step_limiter! are functions of the form limiter!(u, f, t). Here, u is the new solution value (updated in place) after an explicit Euler stage / the whole time step, f is the time derivative function (the semidiscretization for PDEs), and t is the current time. These limiters can be used to enforce physical constraints, e.g. the positivity-preserving limiters of Zhang and Shu (Zhang, Xiangxiong, and Chi-Wang Shu. "Maximum-principle-satisfying and positivity-preserving high-order schemes for conservation laws: survey and new developments." Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. The Royal Society, 2011.).
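
For example, a minimal sketch of a positivity-preserving stage limiter (the limiter body is an assumed example; SSPRK33 is one of the SSP methods):

stage_limiter!(u, f, t) = (u .= max.(u, 0.0)) # clip negative values after each stage
alg = SSPRK33(stage_limiter!)
sol = solve(prob, alg, dt = 0.1)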

Low-Storage Methods

NOTE: All of the 2N methods (ORK256, CarpenterKennedy2N54, NDBLSRK124, NDBLSRK134, NDBLSRK144, DGLDDRK73_C, DGLDDRK84_C, DGLDDRK84_F, and HSLDDRK64) work on the basic principle of being able to perform the step S1 = S1 + F(S2) in just 2 registers. Certain optimizations have been done to achieve this theoretical limit (when alias_u0 is set), but they have the limitation that du should only ever appear on the left-hand side (assignments only) in the implementation.

Example - This is an invalid implementation for 2N methods:

function f(du,u,p,t)
  du[1] = u[1] * u[2]
  du[2] = du[1] * u[2] # invalid: du appears on the RHS
end

If you don't want this optimization and need to use du on the RHS, set the keyword argument williamson_condition to false in the algorithm (by default it is set to true). In this case, 3 registers' worth of memory is needed instead.

Example :

alg = CarpenterKennedy2N54(;williamson_condition=false)

So the above implementation of f becomes valid.

Extrapolation Methods

The following are adaptive order, adaptive step size extrapolation methods:

These methods have arguments for max_order, min_order, and init_order on the adaptive order algorithm, as well as a threading argument for enabling threaded calculation.

Additionally, the ExtrapolationMidpointDeuflhard and ExtrapolationMidpointHairerWanner methods have the additional argument:

sequence. Possible values are :harmonic, :romberg or :bulirsch. Default is :harmonic.

To override, utilize the keyword arguments. For example:

alg = ExtrapolationMidpointDeuflhard(max_order=7,min_order=4,init_order=4,sequence=:bulirsch,threading=false)

Note that the order referred to is the extrapolation order. For AitkenNeville this is the order of the method; for the others, an extrapolation order of n gives an order 2(n+1) method.

Explicit Multistep Methods

Methods using the approximation at more than one previous mesh point to determine the approximation at the next point are called multistep methods. These methods tend to be more efficient as the size of the system or the cost of f increases.

Adams-Bashforth Explicit Methods

These methods require a choice of dt.
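
For example (AB3 is one such method; the dt value is an assumed example):

sol = solve(prob, AB3(), dt = 1 / 2^4)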

Adaptive Step Size Adams Explicit Methods

Methods for Stiff Equations

SDIRK Methods
Fully-Implicit Runge-Kutta Methods (FIRK)
Rosenbrock Methods
Stabilized Explicit Methods

ROCK methods offer min_stages and max_stages keyword arguments. SERK methods derive higher orders via the Aitken-Neville algorithm. SERK2v2 defaults to predictive control but has the option of PI control.
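
For example, a hypothetical use of the stage bounds (assuming the keyword arguments described above):

alg = ROCK2(min_stages = 5, max_stages = 50) # bound the number of stages
sol = solve(prob, alg)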

Exponential Methods for Linear and Affine Problems


Exponential Runge-Kutta Methods

These methods are all fixed timestepping only.

The methods are intended for semilinear problems constructed by SplitODEProblem or SplitODEFunction. They can also be used for a general nonlinear problem, in which case the Jacobian of the right-hand side is used as the linear operator in each time step.
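
A minimal sketch of such a semilinear setup, where the operator and nonlinearity are assumed examples (ETDRK4 is one of the exponential RK methods):

using OrdinaryDiffEq
A = DiffEqArrayOperator([-1.0 0.0; 0.0 -2.0]) # linear part as an operator
fnl(u, p, t) = 0.1 .* u .^ 2                  # nonlinear part (assumed example)
prob = SplitODEProblem(A, fnl, [1.0, 1.0], (0.0, 1.0))
sol = solve(prob, ETDRK4(), dt = 0.01)        # fixed timestepping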

Except for ETD2, all methods come with these options, which can be set in the methods' constructor:

krylov. Whether Krylov approximation of the matrix exponential is used. Default is false.
m. The dimension of the Krylov subspace. Default is 30.
iop. The length of the incomplete orthogonalization procedure (Koskela 2015). Default is 0, indicating full orthogonalization.

Adaptive Exponential Rosenbrock Methods

The exponential Rosenbrock methods cannot be applied to semilinear problems. Options for the solvers are the same as for the Exponential Runge-Kutta methods, except that Krylov approximation is always used.

Exponential Propagation Iterative Runge-Kutta Methods (EPIRK)

These methods are all fixed timestepping only.


It should be noted that many of the methods are still at an experimental stage of development, and thus should be used with caution.

Multistep Methods

Quasi-constant stepping is the time stepping strategy which matches the classic GEAR, LSODE, and ode15s integrators. The variable-coefficient methods match the ideas of the classic EPISODE integrator and early VODE designs. The Fixed Leading Coefficient (FLC) methods match the behavior of the classic VODE and Sundials CVODE integrator.

Implicit Strong-Stability Preserving Runge-Kutta Methods for Hyperbolic PDEs (Conservation Laws)
Extra Options

All of the Rosenbrock and SDIRK methods allow for specification of linsolve: the linear solver which is used. For more information on specifying the linear solver, see the manual page on solver specification.
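
For example, a hypothetical specification (a sketch assuming the LinSolveFactorize wrapper from the linear solver interface and LinearAlgebra's lu! factorization):

using LinearAlgebra
alg = Rosenbrock23(linsolve = LinSolveFactorize(lu!))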

The following methods allow for specification of nlsolve: the nonlinear solver which is used:

Note that performance overloads (Jacobians etc.) are not used in this mode. This can also control autodifferentiation of the Jacobian. For more information on specifying the nonlinear solver, see the manual page on solver specification.

Additionally, the Rosenbrock and SDIRK methods have differentiation controls. In each of these, autodiff can be set to turn on/off autodifferentiation, and chunk_size can be used to set the chunksize of the Dual numbers (see the documentation for ForwardDiff.jl for details). In addition, the Rosenbrock and SDIRK methods can set diff_type, which is the type of numerical differentiation that is used (when autodifferentiation is disabled). The choices are Val{:central}, Val{:forward} or Val{:complex}.


sol = solve(prob,Rosenbrock23()) # Standard, uses autodiff
sol = solve(prob,Rosenbrock23(chunk_size=10)) # Autodiff with chunksize of 10
sol = solve(prob,Rosenbrock23(autodiff=false)) # Numerical differentiation with central differencing
sol = solve(prob,Rosenbrock23(autodiff=false,diff_type=Val{:forward})) # Numerical differentiation with forward differencing

Tableau Method

Additionally, there is the tableau method:

Example usage:

alg = ExplicitRK(tableau=constructDormandPrince())


CompositeAlgorithm

One unique feature of OrdinaryDiffEq.jl is the CompositeAlgorithm, which allows you to, with very minimal overhead, design a multimethod which switches between chosen algorithms as needed. The syntax is CompositeAlgorithm(algtup,choice_function) where algtup is a tuple of OrdinaryDiffEq.jl algorithms, and choice_function is a function which declares which method to use in the following step. For example, we can design a multimethod which uses Tsit5() but switches to Vern7() whenever dt is too small:

choice_function(integrator) = (Int(integrator.dt<0.001) + 1)
alg_switch = CompositeAlgorithm((Tsit5(),Vern7()),choice_function)

The choice_function takes in an integrator and thus all of the features available in the Integrator Interface can be used in the choice function.

A helper algorithm was created for building 2-method automatic switching for stiffness detection algorithms. This is the AutoSwitch algorithm with the following options:

AutoSwitch(nonstiffalg::nAlg, stiffalg::sAlg;
           maxstiffstep=10, maxnonstiffstep=3,
           nonstifftol::T=9//10, stifftol::T=9//10,
           dtfac=2.0, stiffalgfirst=false)

The nonstiffalg must have an appropriate stiffness estimate built into the method. The stiffalg can receive its estimate from the Jacobian calculation. maxstiffstep is the number of stiffness detections required before switching to the stiff algorithm, and maxnonstiffstep is vice versa. nonstifftol and stifftol are the tolerances associated with the stiffness comparison against the stability region. dtfac is the factor by which dt is changed when switching: multiplied when going from non-stiff to stiff, and divided when going from stiff to non-stiff. stiffalgfirst denotes whether the first step should use the stiff algorithm.

Pre-Built Stiffness Detecting and Auto-Switching Algorithms

These methods require an Autoalg(stiffalg) to be chosen as the method to switch to when the ODE is stiff. It can be any of the OrdinaryDiffEq.jl one-step stiff methods, and the constructor accepts all of the arguments of the AutoSwitch algorithm.


tsidas_alg = AutoTsit5(Rodas5())
sol = solve(prob,tsidas_alg)

tsidas_alg = AutoTsit5(Rodas5(),nonstifftol = 11/10)

Both calls construct the Tsit5 method with automatic switching to Rodas5; the second additionally adjusts the non-stiff switching tolerance.


Sundials.jl

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use Sundials.jl:

]add Sundials
using Sundials

The Sundials suite is built around multistep methods. These methods are more efficient than other methods when the cost of the function calculations is really high, but for less costly functions the overhead of managing the timestep outweighs the benefits. However, the BDF method is a classic method for stiff equations and "generally works".

The Sundials algorithms all come with a 3rd order Hermite polynomial interpolation. Note that the constructors for the Sundials algorithms take two main arguments: method, the nonlinear solver (:Newton by default, or :Functional for fixed-point iterations), and linear_solver, the linear solver used within the Newton iterations.

The choices for the linear solver include :Dense (the default), :Band for banded Jacobians, and Krylov methods such as :GMRES and :BCG. For example:


CVODE_BDF() # BDF method using Newton + Dense solver
CVODE_BDF(method=:Functional) # BDF method using Functional iterations
CVODE_BDF(linear_solver=:Band,jac_upper=3,jac_lower=3) # Banded solver with nonzero diagonals 3 up and 3 down
CVODE_BDF(linear_solver=:BCG) # Biconjugate gradient method                                   

The main options for ARKODE are the choice between explicit and implicit and the method order, given via:

ARKODE(Sundials.Explicit()) # Solve with explicit tableau of default order 4
ARKODE(Sundials.Implicit(),order = 3) # Solve with implicit tableau of order 3

The order choices for explicit are 2 through 8, and for implicit 3 through 5. Specific methods can also be set through the etable and itable options for explicit and implicit tableaus respectively; see the ARKODE manual for the list of available tableaus.



These can be set for example via:

ARKODE(Sundials.Explicit(),etable = Sundials.DORMAND_PRINCE_7_4_5)
ARKODE(Sundials.Implicit(),itable = Sundials.KVAERNO_4_2_3)

All of the additional options are available. The full CVODE_BDF constructor is:

CVODE_BDF(;method = :Newton, linear_solver = :Dense,
          jac_upper = 0, jac_lower = 0,
          stored_upper = jac_upper + jac_lower,
          max_hnil_warns = 10,
          max_order = 5,
          max_error_test_failures = 7,
          max_nonlinear_iters = 3,
          max_convergence_failures = 10)

CVODE_Adams(;method = :Functional, linear_solver = :None,
            jac_upper = 0, jac_lower = 0,
            stored_upper = jac_upper + jac_lower,
            max_hnil_warns = 10,
            max_order = 12,
            max_error_test_failures = 7,
            max_nonlinear_iters = 3,
            max_convergence_failures = 10)

ARKODE(Sundials.Implicit();
      method = :Newton, linear_solver = :Dense,
      jac_upper = 0, jac_lower = 0, stored_upper = jac_upper + jac_lower,
      max_hnil_warns = 10,
      max_error_test_failures = 7,
      max_nonlinear_iters = 3,
      max_convergence_failures = 10,
      predictor_method = 0,
      nonlinear_convergence_coefficient = 0.1,
      dense_order = 3,
      order = 4,
      set_optimal_params = false,
      crdown = 0.3,
      dgmax = 0.2,
      rdiv = 2.3,
      msbp = 20,
      adaptivity_method = 0

See the CVODE manual and the ARKODE manual for details on the additional options.


ODEInterface.jl

The ODEInterface algorithms are the classic Fortran algorithms. While the non-stiff algorithms are superseded by the more featured and higher performance Julia implementations from OrdinaryDiffEq.jl, the stiff solvers such as radau are some of the most efficient methods available (but are restricted for use on arrays of Float64).

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use ODEInterfaceDiffEq.jl:

]add ODEInterfaceDiffEq
using ODEInterfaceDiffEq

Note that while the output only has a linear interpolation, a higher order interpolation is used for intermediate dense output for saveat and for event handling.
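
For example, dense output at specific save points can be requested in the usual way (a sketch):

sol = solve(prob, radau(), saveat = 0.1)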


LSODA.jl

This setup provides a wrapper to the algorithm LSODA, a well-known method which uses switching to solve both stiff and non-stiff equations.

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use LSODA.jl:

]add LSODA
using LSODA


SimpleDiffEq.jl

This setup provides access to simplified versions of a few ODE solvers. They mostly exist for experimentation, but offer shorter compile times. They have limitations compared to OrdinaryDiffEq.jl and are not generally faster.

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use SimpleDiffEq.jl:

]add SimpleDiffEq
using SimpleDiffEq


ODE.jl

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use ODE.jl:

]add ODE
using ODE

†: Does not step to the interval endpoint. This can cause issues with discontinuity detection, and discrete variables need to be updated appropriately.


MATLABDiffEq.jl

These algorithms require that the problem was defined using a ParameterizedFunction via the @ode_def macro. Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use MATLABDiffEq.jl:

]add https://github.com/JuliaDiffEq/MATLABDiffEq.jl
using MATLABDiffEq

This requires a licensed MATLAB installation. The available methods are:

For more information on these algorithms, see the MATLAB documentation.


GeometricIntegrators.jl

Note: This package currently segfaults on non-Linux Julia v1.0!

GeometricIntegrators.jl is a set of fixed timestep algorithms written in Julia. Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use GeometricIntegratorsDiffEq.jl:

]add https://github.com/JuliaDiffEq/GeometricIntegratorsDiffEq.jl
using GeometricIntegratorsDiffEq

Note that all of these methods require the user to supply dt.


BridgeDiffEq.jl

Bridge.jl is a set of fixed timestep algorithms written in Julia. These methods are made and optimized for out-of-place functions on immutable (static vector) types. Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use BridgeDiffEq.jl:

]add https://github.com/JuliaDiffEq/BridgeDiffEq.jl
using BridgeDiffEq


TaylorIntegration.jl

TaylorIntegration.jl is a pure-Julia implementation of an adaptive-order Taylor series method for high accuracy integration of ODEs. These methods are optimized when the absolute tolerance is required to be very low. Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use TaylorIntegration.jl:

]add TaylorIntegration
using TaylorIntegration

Note: this method is much faster if you put @taylorize on your derivative function!
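
For example (a sketch; the derivative function and the Taylor order of 20 are assumed examples):

using TaylorIntegration, OrdinaryDiffEq
@taylorize function f!(du, u, p, t) # hypothetical derivative function
    du[1] = -u[1]
    nothing
end
prob = ODEProblem(f!, [1.0], (0.0, 1.0))
sol = solve(prob, TaylorMethod(20), abstol = 1e-20) # order 20 Taylor method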


QuDiffEq.jl

QuDiffEq.jl is a package for solving differential equations using quantum algorithms. It makes use of the Yao framework for simulating quantum circuits.

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use QuDiffEq.jl:

]add https://github.com/QuantumBFS/QuDiffEq.jl
using QuDiffEq


NeuralNetDiffEq.jl

This method trains a neural network using Flux.jl to approximate the solution of the ODE. Currently this method isn't competitive, but it is a fun curiosity that will be improved with future integration with Zygote.

Note that this setup is not automatically included with DifferentialEquations.jl. To use the following algorithms, you must install and use NeuralNetDiffEq.jl:

]add NeuralNetDiffEq
using NeuralNetDiffEq

List of Supplied Tableaus

A large variety of tableaus have been supplied by default via DiffEqDevTools.jl. The list of tableaus can be found in the developer docs. To use them, you must first install the library:

]add DiffEqDevTools
using DiffEqDevTools

For the most useful and common algorithms, a hand-optimized version is supplied in OrdinaryDiffEq.jl, which is recommended for general use (i.e. use DP5 instead of ExplicitRK with tableau=constructDormandPrince()). However, the tableau versions serve as a good way to compare between tableaus and understand the pros/cons of the methods. Every published tableau (that I know exists) is implemented. Note that user-defined tableaus are also accepted. To see how to define a tableau, check out the premade tableau source code. Tableau docstrings should have appropriate citations (if not, file an issue).

Plot recipes are provided which will plot the stability region for a given tableau.


References

Koskela, A. (2015). Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method. In Numerical Mathematics and Advanced Applications - ENUMATH 2013 (pp. 345-353). Springer, Cham.