Architecture Overview

FastInterpolations.jl is designed for zero-allocation interpolation on hot paths. This section explains the architectural choices that enable that performance.

Design Goals

  1. Zero-Allocation Hot Paths: No heap allocation after warmup
  2. O(1) Cache Lookup: Lock-free cache hits via RCU pattern
  3. Thread Safety: Safe for concurrent use from multiple threads (see the sketch after this list)
  4. Simple API: One function call for both construction and evaluation
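
The thread-safety goal can be exercised directly from user code. The sketch below shares one interpolant across threads via the documented cubic_interp constructor; it is an illustrative usage example rather than a test of the internals, and it assumes Julia was started with more than one thread (e.g. julia -t 4).

using FastInterpolations

x = range(0.0, 2π, 100)
y = sin.(x)
itp = cubic_interp(x, y)            # construct once, then share across threads

results = zeros(8)
Threads.@threads for i in 1:8
    # Evaluation is read-only on the pre-computed coefficients,
    # so the same interpolant can be queried concurrently without locks.
    results[i] = itp(i * 0.5)
end
println(round.(results, digits=4))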

Two Usage Patterns

FastInterpolations.jl supports two patterns depending on your use case:

Pattern 1: One-Shot (Recommended for Dynamic Y)

Use when the x-grid is fixed but y-values change over time (e.g., simulation loops).

using FastInterpolations

x = range(0.0, 10.0, 100)
xq = [1.0, 2.0, 3.0]
out = zeros(3)

# Simulated loop - y changes each iteration
for step in 1:3
    y = sin.(x .+ step * 0.1)  # y evolves
    cubic_interp!(out, x, y, xq)  # zero-allocation (after first call)
    println("Step $step: ", round.(out, digits=4))
end
Step 1: [0.8912, 0.8632, 0.0416]
Step 2: [0.932, 0.8085, -0.0584]
Step 3: [0.9636, 0.7457, -0.1577]

Why zero-allocation? The auto-cache stores the LU factorization keyed by the x-grid. On subsequent calls with the same grid, the cached factorization is reused.
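
The cache internals are not part of the public API; the sketch below only illustrates the general shape of an RCU-style, grid-keyed cache. Every name in it (GridCache, CACHE, cached_factorization) is hypothetical, and the tridiagonal system shown is the standard natural-cubic-spline assembly, not necessarily what the package builds.

# Hypothetical sketch of an RCU-style cache keyed by the x-grid.
# None of these names exist in FastInterpolations.jl.
using LinearAlgebra

mutable struct GridCache
    @atomic table::Base.ImmutableDict{UInt,Any}    # hash(x-grid) => LU factorization
end

const CACHE = GridCache(Base.ImmutableDict{UInt,Any}())

function cached_factorization(x::AbstractVector)
    key = hash(x)
    tbl = @atomic CACHE.table            # lock-free read: a single atomic load
    fact = get(tbl, key, nothing)
    fact !== nothing && return fact      # cache hit: no lock, no allocation
    # Cache miss: assemble and factorize the spline system for this grid once.
    h = diff(x)
    A = Tridiagonal(vcat(h[1:end-1], 0.0),
                    vcat(1.0, 2 .* (h[1:end-1] .+ h[2:end]), 1.0),
                    vcat(0.0, h[2:end]))
    fact = lu(A)
    # Publish a new snapshot; concurrent readers keep seeing a consistent table.
    # A real implementation would also guard the write path (lock or compare-and-swap).
    @atomic CACHE.table = Base.ImmutableDict(tbl, key => fact)
    return fact
end

The key property is that a cache hit is just a hash plus a lookup on an immutable snapshot, which is what makes the O(1), lock-free lookup in goal 2 possible.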

Pattern 2: Interpolant (Recommended for Static Data)

Use when both x and y are fixed and you need repeated evaluation.

using FastInterpolations

x = range(0.0, 2π, 100)
y = sin.(x)

# Create interpolant once (pre-computes coefficients)
itp = cubic_interp(x, y)

# Evaluate many times (zero-allocation)
println("itp(0.5) = ", round(itp(0.5), digits=4))
println("itp(1.0) = ", round(itp(1.0), digits=4))
println("itp(π) = ", round(itp(π), digits=4))
itp(0.5) = 0.4794
itp(1.0) = 0.8415
itp(π) = 0.0

Why zero-allocation? The interpolant stores pre-computed spline coefficients. Evaluation only requires local arithmetic.
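
The exact coefficient layout is internal to the package, but the sketch below shows why evaluation needs no allocation: locate the interval with a binary search, then evaluate a local cubic polynomial. The LocalCubic type and its fields are illustrative only, not the type returned by cubic_interp.

# Illustrative only: LocalCubic is not the actual interpolant type.
struct LocalCubic{X<:AbstractVector,T}
    x::X                                  # knots
    a::Vector{T}                          # per-interval polynomial coefficients
    b::Vector{T}
    c::Vector{T}
    d::Vector{T}
end

function (s::LocalCubic)(xq)
    # Binary search for the bracketing interval: O(log n), no allocation.
    i = clamp(searchsortedlast(s.x, xq), 1, length(s.x) - 1)
    t = xq - s.x[i]
    # Horner evaluation of the local cubic: a handful of multiplies and adds.
    return s.a[i] + t * (s.b[i] + t * (s.c[i] + t * s.d[i]))
end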

Choosing the Right Pattern

Comprehensive API Selection Guide

For detailed guidance, including the SeriesInterpolant pattern, performance trade-offs, and optimization tips, see the API Selection Guide.

Allocation Behavior Summary

Method                 Scalar Query   Vector Query   In-Place
linear_interp          zero           output only    zero
cubic_interp           zero*          output only    zero*
Interpolant itp(x)     zero           output only    zero

*zero after first call (auto-cached)
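
These claims can be checked with BenchmarkTools (assumed to be installed separately); @ballocated reports the minimum bytes allocated per call, excluding compilation.

using FastInterpolations, BenchmarkTools

x   = range(0.0, 10.0, 100)
y   = sin.(x)
xq  = [1.0, 2.0, 3.0]
out = zeros(3)

cubic_interp!(out, x, y, xq)                   # warm-up call populates the auto-cache
@ballocated cubic_interp!($out, $x, $y, $xq)   # expected: 0 bytes on the cache-hit path

itp = cubic_interp(x, y)
@ballocated $itp(1.5)                          # expected: 0 bytes for a scalar query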

Performance Characteristics

Operation                        Cost
Cache lookup                     ~10 ns (lock-free)
Full interpolation (cache hit)   ~800 ns (100 points)
Cache miss                       + LU factorization time
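
These numbers are hardware-dependent; a rough way to reproduce the hit-versus-miss split on your own machine is sketched below (BenchmarkTools assumed to be installed). The first timed call uses a fresh grid so it pays the factorization cost, while @belapsed times only the steady-state, cache-hit path.

using FastInterpolations, BenchmarkTools

x   = range(0.0, 10.0, 100)
y   = sin.(x)
xq  = [1.0, 2.0, 3.0]
out = zeros(3)
cubic_interp!(out, x, y, xq)                     # compile + first factorization

x2 = range(0.0, 10.0, 101)                       # new grid forces a cache miss
y2 = sin.(x2)
t_miss = @elapsed cubic_interp!(out, x2, y2, xq) # includes the LU factorization
t_hit  = @belapsed cubic_interp!($out, $x2, $y2, $xq)
println("cache miss: ", t_miss, " s   cache hit: ", t_hit, " s")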

Julia Version Notes

  • Minimum: Julia 1.7+ (for @atomic support; see the snippet after this list)
  • Recommended: Julia 1.12+ (improved atomic semantics, guaranteed zero allocation)
  • Older versions may show minor (~16-64 bytes) allocation overhead in edge cases
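
For reference, this is the Julia 1.7+ atomic-field syntax the version requirement refers to; the Counter struct here is a toy example, not part of the package.

mutable struct Counter
    @atomic hits::Int        # field declared atomic (requires Julia 1.7+)
end

c = Counter(0)
@atomic c.hits += 1          # atomic read-modify-write
@atomic c.hits               # atomic read, returns 1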

See Also