AD Support for N-Dimensional Interpolants
This page covers automatic differentiation for all ND interpolant types (CubicInterpolantND, QuadraticInterpolantND, LinearInterpolantND, ConstantInterpolantND) in 2D, 3D, and higher dimensions.
For 1D interpolants, see Automatic Differentiation Support.
For derivatives with respect to query coordinates (gradient, Hessian, mixed partials), FastInterpolations provides analytical functions that are much faster than AD:
```julia
itp(pt; deriv=DerivOp(1, 0))          # partial df/dx, zero-allocation
FastInterpolations.gradient(itp, pt)  # full gradient (tuple input → zero-alloc)
FastInterpolations.hessian(itp, pt)   # Hessian matrix
```

AD (ForwardDiff, Zygote, Enzyme) is useful for:
- Adjoint computation (df/dy): differentiating w.r.t. data values, not query points. Native adjoint operators are available for all four interpolant types — see Adjoint Operators. Zygote and Enzyme use them internally via registered AD rules.
- Composite pipelines: when the interpolant is embedded inside a larger differentiable function.
For query-coordinate derivatives, the built-in analytical functions are the recommended approach.
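As a concrete illustration of the adjoint (df/dy) use case, the sketch below closes over the grid and differentiates the interpolated value with respect to the data array via Zygote. The grids `x`, `y` and the sample data here are illustrative, and whether differentiating through the constructor works end-to-end depends on the package's registered AD rules — see Adjoint via AD (∂f/∂y) for the supported patterns:

```julia
using Zygote, FastInterpolations

x = range(0.0, 1.0, 20)
y = range(0.0, 1.0, 20)
data = [sin(xi) * cos(yj) for xi in x, yj in y]

# Sensitivity of the interpolated value at one query point
# with respect to every entry of the data array (df/dy).
loss(d) = cubic_interp((x, y), d)((0.5, 0.5))
ddata = Zygote.gradient(loss, data)[1]  # same shape as `data`
```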
Quick Start
```julia
# Vector API for AD
itp([0.5, 0.5])                        # evaluation
ForwardDiff.gradient(itp, [0.5, 0.5])  # gradient
ForwardDiff.hessian(itp, [0.5, 0.5])   # hessian

# Fast analytical derivatives (much faster; tuple form is zero-allocation)
FastInterpolations.gradient(itp, (0.5, 0.5))  # analytical gradient
FastInterpolations.hessian(itp, (0.5, 0.5))   # analytical hessian
```

```
2×2 Matrix{Float64}:
 -0.420687  -0.420736
 -0.420736  -0.420687
```

Supported APIs
Tuple API (Original)
```julia
itp((0.5, 0.5))                                  # evaluation
ForwardDiff.derivative(q -> itp((q, 0.5)), 0.5)  # partial derivative
```

```
0.7701511508373923
```

Vector API (AD-friendly)
```julia
itp([0.5, 0.5])                        # evaluation
ForwardDiff.gradient(itp, [0.5, 0.5])  # full gradient
ForwardDiff.hessian(itp, [0.5, 0.5])   # hessian matrix
```

```
2×2 Matrix{Float64}:
 -0.420687  -0.420736
 -0.420736  -0.420687
```

Performance Comparison
| Method | Allocation | Notes |
|---|---|---|
| FastInterpolations.gradient(itp, tuple) | 0 B | Analytical, tuple output |
| FastInterpolations.gradient(itp, vector) | small | Analytical, allocates output vector |
| FastInterpolations.hessian(itp, x) | minimal | Analytical, symmetric matrix |
| deriv keyword | minimal | Analytical, single partial |
| ForwardDiff.gradient | moderate | Dual number propagation |
| ForwardDiff.hessian | moderate | Dual number propagation |
| Zygote (w/ rrule) | moderate | Uses ChainRulesCore |
| Zygote (w/o rrule) | heavy | Source transformation fallback |
Built-in analytical methods are much faster than any AD backend for query-coordinate derivatives.
When ChainRulesCore is loaded (automatically with Zygote), the extension provides a significant speedup for reverse-mode AD by using analytical derivatives internally.
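The gap can be measured directly with BenchmarkTools. The sketch below sets up an illustrative 2D interpolant (the grids and data are assumptions, not from this page) and times each path; absolute timings will vary by machine:

```julia
using BenchmarkTools, ForwardDiff, Zygote, FastInterpolations

x = range(0.0, 1.0, 20)
y = range(0.0, 1.0, 20)
itp = cubic_interp((x, y), [sin(xi) * cos(yj) for xi in x, yj in y])

@btime FastInterpolations.gradient($itp, (0.5, 0.5))  # analytical, zero-allocation
@btime ForwardDiff.gradient($itp, $[0.5, 0.5])        # dual-number propagation
@btime Zygote.gradient($itp, $[0.5, 0.5])             # reverse mode via rrules
```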
Analytical Gradient & Hessian (Recommended)
FastInterpolations provides built-in gradient and hessian functions that use analytical derivatives internally. These are much faster than ForwardDiff equivalents. The tuple-query form is zero-allocation; the vector-query form allocates the output vector.
gradient and hessian are exported by FastInterpolations, so gradient(itp, x) works when using FastInterpolations alone. However, since many AD packages (ForwardDiff, Zygote) also provide functions with the same name, we use the qualified form FastInterpolations.gradient throughout this guide to avoid ambiguity.
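A quick way to spot-check any of these gradients against the interpolant itself is a central-difference comparison. The helper below is plain base Julia with no package dependencies; `fd_gradient` is a name introduced here for illustration, not part of FastInterpolations:

```julia
# Central-difference gradient check for any callable f taking a Vector.
function fd_gradient(f, x::AbstractVector; h=1e-6)
    g = similar(x, Float64)
    for i in eachindex(x)
        xp = copy(x); xm = copy(x)
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2h)  # O(h^2) central difference
    end
    return g
end

# Usage: compare against the analytical gradient, e.g.
# isapprox(fd_gradient(itp, [0.5, 0.5]),
#          collect(FastInterpolations.gradient(itp, (0.5, 0.5))); atol=1e-5)
```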
```julia
# Gradient: returns (df/dx, df/dy)
grad = FastInterpolations.gradient(itp, (0.5, 0.5))  # Tuple input -> Tuple output
grad = FastInterpolations.gradient(itp, [0.5, 0.5])  # Vector input -> Vector output
```

```
2-element Vector{Float64}:
  0.7701511508373913
 -0.22984884644018486
```

```julia
# Hessian: returns NxN symmetric matrix
H = FastInterpolations.hessian(itp, (0.5, 0.5))
# H = [d2f/dx2   d2f/dxdy]
#     [d2f/dxdy  d2f/dy2 ]
```

```
2×2 Matrix{Float64}:
 -0.420687  -0.420736
 -0.420736  -0.420687
```

When to Use
| Use Case | Recommended Method |
|---|---|
| Performance-critical code | FastInterpolations.gradient(itp, x), FastInterpolations.hessian(itp, x) |
| Optimization with Optim.jl | FastInterpolations.gradient(itp, x) for custom gradient |
| General AD compatibility | ForwardDiff.gradient |
| Reverse-mode AD (Zygote) | Zygote with ChainRulesCore extension |
3D and Higher Dimensions
Works for any dimension:
```julia
z = range(0.0, 1.0, 20)
data3d = [xi + yj + zk for xi in x, yj in y, zk in z]
itp3d = cubic_interp((x, y, z), data3d)

FastInterpolations.gradient(itp3d, (0.5, 0.5, 0.5))  # -> (df/dx, df/dy, df/dz)
FastInterpolations.hessian(itp3d, (0.5, 0.5, 0.5))   # -> 3x3 symmetric matrix
```

```
3×3 Matrix{Float64}:
 1.34724e-14  1.21145e-13  8.22937e-14
 1.21145e-13  1.53098e-14  1.84369e-14
 8.22937e-14  1.84369e-14  2.95319e-14
```

ForwardDiff Integration
ForwardDiff works seamlessly with both Tuple and Vector APIs:
```julia
# Gradient (Vector API)
grad = ForwardDiff.gradient(itp, [0.5, 0.5])

# Hessian
H = ForwardDiff.hessian(itp, [0.5, 0.5])

# Partial derivatives (Tuple API)
df_dx = ForwardDiff.derivative(q -> itp((q, 0.5)), 0.5)
df_dy = ForwardDiff.derivative(q -> itp((0.5, q)), 0.5)
```

Zygote Integration
Zygote uses ChainRulesCore for efficient differentiation. Registered rrules are provided for all four ND interpolant types — eval, gradient, hessian, laplacian, and value_gradient — with both ∂/∂query and ∂/∂data support:
```julia
using Zygote
grad = Zygote.gradient(itp, [0.5, 0.5])[1]
```

Without the ChainRulesCore extension, Zygote falls back to source transformation, which is significantly slower and uses much more memory.
Optimization with Optim.jl
Standard Usage (ForwardDiff)
```julia
using Optim

# Direct optimization with AD
result = optimize(itp, [0.3, 0.3], LBFGS(); autodiff=:forward)
```

Maximum Performance (Analytical Gradient)
For hot loops or performance-critical code, use the built-in analytical gradient:
```julia
# Simple: use FastInterpolations.gradient() directly
function g!(G, x)
    grad = FastInterpolations.gradient(itp, x)
    G .= grad
end

# Use with Optim.jl
result = optimize(x -> itp(x), g!, [0.3, 0.3], LBFGS())
```

Or for maximum control with the deriv keyword:

```julia
function g!(G, x)
    pt = (x[1], x[2])
    G[1] = itp(pt; deriv=DerivOp(1, 0))
    G[2] = itp(pt; deriv=DerivOp(0, 1))
end
```

Both approaches are much faster than ForwardDiff for gradient computation.
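When the optimizer needs the objective and the gradient at the same point, the two can be fused with Optim.jl's documented Optim.only_fg! interface so each iteration makes a single callback. The sketch below builds an illustrative interpolant with an interior minimum (the grids and data are assumptions for this example):

```julia
using Optim, FastInterpolations

x = range(0.0, 1.0, 20)
y = range(0.0, 1.0, 20)
# Quadratic bowl with its minimum at (0.5, 0.5)
itp = cubic_interp((x, y), [(xi - 0.5)^2 + (yj - 0.5)^2 for xi in x, yj in y])

# Objective and gradient in one callback; Optim passes `nothing`
# for whichever output it does not need on a given call.
function fg!(F, G, x)
    if G !== nothing
        G .= FastInterpolations.gradient(itp, x)
    end
    if F !== nothing
        return itp(x)
    end
    return nothing
end

result = optimize(Optim.only_fg!(fg!), [0.3, 0.3], LBFGS())
```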
Complex-valued Interpolants
For complex data, ForwardDiff.gradient requires scalar output:
```julia
data_c = [sin(xi) + im*cos(yj) for xi in x, yj in y]
itp_c = cubic_interp((x, y), data_c)

# ForwardDiff.gradient(itp_c, [0.5, 0.5])  # fails (gradient expects Real output)

# Option 1: Use jacobian
jac = ForwardDiff.jacobian(
    v -> [real(itp_c(v)), imag(itp_c(v))],
    [0.5, 0.5]
)
```

```
2×2 Matrix{Float64}:
 0.877583     -6.76308e-16
 1.62786e-15  -0.479426
```

```julia
# Option 2: Separate real/imag gradients
grad_re = ForwardDiff.gradient(v -> real(itp_c(v)), [0.5, 0.5])
grad_im = ForwardDiff.gradient(v -> imag(itp_c(v)), [0.5, 0.5])
```

3D and Higher Dimensions
All features work for any dimension:
```julia
# 3D interpolant (reusing itp3d from above)
itp3d([0.5, 0.5, 0.5])
ForwardDiff.gradient(itp3d, [0.5, 0.5, 0.5])  # Returns [1, 1, 1]
```

```
3-element Vector{Float64}:
 0.9999999999999993
 1.0
 0.9999999999999907
```

Full Optimization Example
For a complete real-world example using gradient!/hessian! with Optim.jl (including FDM vs AD vs analytical comparison on the Rosenbrock function), see the Optimization with Optim.jl guide.
See Also
- AD Support (1D) — ForwardDiff, Zygote, Enzyme backend details for 1D
- Adjoint via AD (∂f/∂y) — Data sensitivity via AD backends
- Adjoint Operators — Native matrix-free adjoint operators for all types
- Optimization with Optim.jl — Using AD and analytical derivatives with Optim.jl