# Adjoint Operators

## What Is an Adjoint?

For a fixed grid `x`, query points `xq`, and boundary conditions, spline interpolation is an affine operation on the data values $\mathbf{f}$:

\[\mathbf{y} = W \, \mathbf{f} + \mathbf{c}\]

where $W$ is the $m \times n$ interpolation weight matrix ($m$ query points, $n$ grid points) and $\mathbf{c}$ is a constant offset determined by the boundary condition values.
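To make the affine structure concrete, here is a minimal sketch in plain Julia. The helper `lin_interp` is a hypothetical stand-in for a spline kernel (simple piecewise-linear interpolation, not FastInterpolations' actual implementation); probing it with unit vectors recovers $W$ and $\mathbf{c}$ explicitly:

```julia
using LinearAlgebra

# Hypothetical stand-in for a spline kernel: 1D piecewise-linear
# interpolation on a sorted grid, clamped at the boundaries.
function lin_interp(x::AbstractVector, f::AbstractVector, xq::AbstractVector)
    map(xq) do q
        i = clamp(searchsortedlast(x, q), 1, length(x) - 1)
        t = clamp((q - x[i]) / (x[i+1] - x[i]), 0.0, 1.0)
        (1 - t) * f[i] + t * f[i+1]
    end
end

x  = collect(0.0:0.25:1.0)      # n = 5 grid points
xq = [0.1, 0.5, 0.9]            # m = 3 query points
n  = length(x)

# c is the response to f = 0; each column of W is the response to a
# unit vector, minus c.
c = lin_interp(x, zeros(n), xq)
W = hcat((lin_interp(x, Matrix(I, n, n)[:, j], xq) .- c for j in 1:n)...)

f = sin.(2π .* x)
@assert lin_interp(x, f, xq) ≈ W * f .+ c   # affinity holds exactly
```

For this simple kernel $\mathbf{c}$ is zero; boundary conditions with prescribed values produce a nonzero offset.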

!!! note "Matrix-free implementation"
    $W$ is a mathematical abstraction: FastInterpolations never forms this matrix explicitly. Both $W \mathbf{f}$ (forward) and $W^\top \bar{\mathbf{y}}$ (adjoint) are computed via matrix-free algorithms that exploit each spline's structure.

The adjoint (transpose) operator maps query-space sensitivities back to data-space:

\[\bar{\mathbf{f}} = W^\top \bar{\mathbf{y}}\]

| Direction | Operation | Description |
|---|---|---|
| Forward (gather) | $\mathbf{y} = W \mathbf{f} + \mathbf{c}$ | Weighted sum of nearby data + BC offset → interpolated values |
| Adjoint (scatter) | $\bar{\mathbf{f}} = W^\top \bar{\mathbf{y}}$ | Distribute query-space sensitivities back to grid nodes |
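The gather/scatter duality can be sketched in plain Julia for piecewise-linear weights (these `forward!`/`adjoint!` helpers are illustrative, not the package's kernels): the forward pass gathers from the two bracketing grid nodes, and the adjoint scatters each $\bar{y}_k$ back to the same nodes with the same weights. The dot-product identity $\langle W\mathbf{f}, \bar{\mathbf{y}} \rangle = \langle \mathbf{f}, W^\top \bar{\mathbf{y}} \rangle$ verifies that the two are a correct adjoint pair:

```julia
using LinearAlgebra

# Gather: y[k] = (1-t)*f[i] + t*f[i+1] (interior queries only).
function forward!(y, x, f, xq)
    for (k, q) in enumerate(xq)
        i = clamp(searchsortedlast(x, q), 1, length(x) - 1)
        t = (q - x[i]) / (x[i+1] - x[i])
        y[k] = (1 - t) * f[i] + t * f[i+1]
    end
    return y
end

# Scatter: accumulate ȳ[k] into f̄ with the same two weights.
function adjoint!(f̄, x, ȳ, xq)
    fill!(f̄, 0.0)
    for (k, q) in enumerate(xq)
        i = clamp(searchsortedlast(x, q), 1, length(x) - 1)
        t = (q - x[i]) / (x[i+1] - x[i])
        f̄[i]   += (1 - t) * ȳ[k]
        f̄[i+1] += t * ȳ[k]
    end
    return f̄
end

x, xq = collect(0.0:0.2:1.0), [0.05, 0.33, 0.71]
f, ȳ  = rand(length(x)), rand(length(xq))
y, f̄  = zeros(length(xq)), zeros(length(x))

forward!(y, x, f, xq); adjoint!(f̄, x, ȳ, xq)
@assert dot(y, ȳ) ≈ dot(f, f̄)   # ⟨W f, ȳ⟩ = ⟨f, Wᵀ ȳ⟩
```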
!!! warning "Adjoint ≠ Inverse"
    The adjoint $W^\top$ is the Jacobian transpose, not the inverse. It maps sensitivities (cotangent vectors) from query-space back to data-space, which is exactly what reverse-mode AD computes for the pullback $\partial L / \partial \mathbf{f}$.

## Why Use Adjoints?

The adjoint arises naturally in any workflow that needs gradients with respect to data values $\partial L / \partial \mathbf{f}$.

| Application | How the Adjoint Appears |
|---|---|
| Inverse problems | Fit grid data $f$ by minimizing $\lVert Wf + c - y_\text{obs} \rVert^2$. Gradient: $\nabla_f L = 2 W^\top (Wf + c - y_\text{obs})$. |
| PDE-constrained optimization | Propagate sensitivities through interpolation steps without forming the full Jacobian. |
| Neural network layers | Backpropagation through a spline interpolation layer requires the adjoint. |
| Data assimilation | Map observation-space increments back to state-space corrections (4D-Var). |
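The inverse-problem row can be worked through with a toy explicit $W$ (in practice $W^\top r$ is applied matrix-free; a dense matrix just makes the algebra visible). Setting the gradient $2 W^\top (Wf + c - y_\text{obs})$ to zero yields the normal equations $W^\top W f = W^\top (y_\text{obs} - c)$:

```julia
using LinearAlgebra

m, n = 8, 5
W, c, y_obs = randn(m, n), randn(m), randn(m)

loss(f) = sum(abs2, W * f + c - y_obs)
grad(f) = 2 * W' * (W * f + c - y_obs)   # ∇_f L = 2 Wᵀ (W f + c - y_obs)

# Solving the normal equations gives the least-squares fit directly.
fstar = (W' * W) \ (W' * (y_obs - c))
@assert norm(grad(fstar)) < 1e-6         # stationary point of the fit
```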

## Computing the Adjoint

### Automatic Differentiation

Pass `f` as a live variable through the one-shot API and let an AD backend differentiate. See Adjoint via AD for details and backend compatibility.

```julia
using Zygote

# Gradient of a scalar loss with respect to the data values f
∇f = Zygote.gradient(f -> sum(cubic_interp(x, f, xq)), f)[1]

# Works with all types: linear_interp, quadratic_interp, constant_interp
```

### Native Adjoint Operator

FastInterpolations provides matrix-free adjoint operators for every interpolant type. Construct once from grid + queries, then apply to any $\bar{\mathbf{y}}$:

```julia
adj = cubic_adjoint(x, xq; bc=CubicFit())
f̄ = adj(ȳ)                  # allocating
adj(f̄, ȳ)                   # in-place, zero-allocation
```

Zygote and Enzyme use these native operators internally via registered AD rules, so you get native performance through the standard AD interface.

See Adjoint 1D and Adjoint ND for the full API.


## Supported Methods

All four interpolant types have full native adjoint support in both 1D and ND:

| Method | Native Adjoint | AD (ForwardDiff / Zygote / Enzyme) |
|---|---|---|
| Constant | `ConstantAdjoint` / `ConstantAdjointND` | ✓ |
| Linear | `LinearAdjoint` / `LinearAdjointND` | ✓ |
| Quadratic | `QuadraticAdjoint` / `QuadraticAdjointND` | ✓ |
| Cubic | `CubicAdjoint` / `CubicAdjointND` | ✓ |
!!! warning "Enzyme compatibility"
    Enzyme support is fully tested on Julia ≥ 1.10 on standard platforms. On some OS/architecture combinations or older Julia versions, Enzyme may encounter edge cases. See Adjoint via AD for details.

## See Also