# Interpolate
## Tedious Construction of Polynomial Splines
A polynomial spline is a continuously defined function made of polynomial pieces of nonnegative integer degree \(N\). What makes such a spline special is that all pieces connect smoothly when \(N\geq1\). In one dimension, let some piece be defined over \(x_{-1}\leq x\leq x_{0}\) by the polynomial

\[
a(x)=\sum_{n=0}^{N}a_{n}\,x^{n},
\]
and let an adjacent piece be defined over \(x_{0}\leq x\leq x_{1}\) as

\[
b(x)=\sum_{n=0}^{N}b_{n}\,x^{n}.
\]
Then, to be a legitimate part of a polynomial spline, the two pieces \(a\) and \(b\) must join continuously at \(x_{0}\). Their first derivatives must agree there as well, and so must every higher-order derivative up to the \((N-1)\)th one. (The derivatives of order \(N\) of \(a\) and \(b\), however, are allowed to disagree at \(x_{0}\).)
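These continuity constraints can be checked numerically. The sketch below uses SciPy's `CubicSpline` as a stand-in (it is not the SplineOps API) to build a cubic spline (\(N=3\)) on integer knots and inspects the per-piece polynomial coefficients at an interior knot: the derivatives of order \(0\) to \(N-1\) match, while the \(N\)th derivative may jump.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical data: samples of a smooth function at the integers.
x = np.arange(6, dtype=float)
y = np.sin(x)
cs = CubicSpline(x, y)          # N = 3, pieces of unit length

# cs.c[m, i] holds the coefficient of (x - x[i])**(3 - m) on piece i.
# Compare the derivatives of piece 1 (at its right end, t = 1) with
# those of piece 2 (at its left end, t = 0), i.e., at the knot x = 2.
a, b = cs.c[:, 1], cs.c[:, 2]
t = 1.0
left = [a[0]*t**3 + a[1]*t**2 + a[2]*t + a[3],   # value
        3*a[0]*t**2 + 2*a[1]*t + a[2],           # 1st derivative
        6*a[0]*t + 2*a[1],                       # 2nd derivative
        6*a[0]]                                  # 3rd derivative
right = [b[3], b[2], 2*b[1], 6*b[0]]

# Derivatives of order 0 to N - 1 = 2 agree at the knot...
for order in range(3):
    assert np.isclose(left[order], right[order])
# ...while the derivative of order N = 3 is free to jump there.
print(left[3] - right[3])
```

The same check can be repeated at any interior knot; only the degree-\(N\) coefficient is allowed to change from piece to piece without further constraint.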
In one dimension, each polynomial piece has a left neighbor and a right neighbor. With many pieces and a large polynomial degree, the explicit construction of a spline would be a tedious task that must honor many constraints of continuity. In the interpolation context, one additionally asks that the spline, which we now call \(f\), reproduce the list \(\{y[k]\}_{k=0}^{K-1}\) of \(K\) samples at the set \(\{x[k]\}_{k=0}^{K-1}\) of \(K\) sampling locations, with

\[
f(x[k])=y[k],\qquad k\in[0\ldots K-1].
\]
## Practical Construction of Polynomial Splines
The construction of a polynomial spline is greatly simplified under appropriate assumptions, namely, that the sampling locations are regularly spaced exactly one unit apart and coincide with the integers. It is also especially convenient to assume that the polynomial pieces are of unit length, too, and that their extremities coincide either with the integers (odd \(N\)) or with the half-integers (even \(N\)). Under these simplifying assumptions, the algorithms championed in [1], [2], [3] honor all constraints (continuity and interpolation), with a computational effort \({\mathcal{O}}(K\,\left\lfloor N/2\right\rfloor)\) to build a polynomial spline of degree \(N\) out of \(K\) samples.
Once the spline is built, it can be interrogated at any continuous argument \(x\in{\mathbb{R}}\) to yield the value \(f(x)\), with the per-point computational effort \({\mathcal{O}}(N^{2})\).
In the SplineOps implementation, the polynomial pieces are never expressed explicitly. Instead, so-called spline coefficients \(c\) are used to represent the spline. Moreover, the polynomial pieces that make up the spline are decomposed in a basis of integer translates of B-splines, which are the functions \(\beta^{N}\) uniquely defined as even-symmetric splines of unit integral and least support. There, \(N\) is a superscript (not an exponent) that gives the degree of the B-spline.
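To make the coefficient representation concrete, here is a minimal sketch, assuming a cubic spline on integer knots. It uses SciPy's `spline_filter1d` as a stand-in for the recursive prefilter that turns samples into coefficients (the SplineOps API itself is not shown), writes out the cubic B-spline \(\beta^{3}\) explicitly, and verifies that the resulting expansion reproduces the samples at the integers.

```python
import numpy as np
from scipy.ndimage import spline_filter1d

def bspline3(x):
    """Centered cubic B-spline beta^3 (support [-2, 2], unit integral)."""
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3/2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

rng = np.random.default_rng(0)
y = rng.standard_normal(8)            # K = 8 hypothetical samples

# Recursive prefilter: samples -> cubic-spline coefficients c
# (mirror boundary extension, cost linear in K).
c = spline_filter1d(y, order=3, mode='mirror')

def f(x, c):
    """Evaluate f(x) = sum_k c[k] beta^3(x - k), extending the
    coefficients by mirror symmetry: c[-k] = c[k]."""
    K = len(c)
    ks = np.arange(-2, K + 2)         # indices whose support can reach x
    idx = np.abs(ks)                  # mirror about k = 0
    idx = np.where(idx > K - 1, 2*(K - 1) - idx, idx)  # mirror at k = K-1
    return sum(c[i] * bspline3(x - k) for k, i in zip(ks, idx))

# The spline reproduces the samples at the integers.
assert np.allclose([f(k, c) for k in range(len(y))], y)
```

At an integer argument only three translates of \(\beta^{3}\) are nonzero (with weights \(1/6\), \(2/3\), \(1/6\)), which is why a prefilter is needed: the coefficients \(c\) are not the samples themselves.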
## Tensor Product of Splines
In one dimension, we write a generic spline in terms of \(x\in{\mathbb{R}}\) as

\[
f(x)=\sum_{k\in{\mathbb{Z}}}c[k]\,\beta^{N}(x-k).
\]
In multiple dimensions, we consider tensor-product splines. For instance, a continuously defined grayscale spline image that interpolates the array \(y[k_{1},k_{2}]\) of width \(W\) and height \(H\) writes

\[
f(x_{1},x_{2})=\sum_{k_{1}\in{\mathbb{Z}}}\sum_{k_{2}\in{\mathbb{Z}}}c[k_{1},k_{2}]\,\beta^{N}(x_{1}-k_{1})\,\beta^{N}(x_{2}-k_{2})
\]
and satisfies \(f(k_{1},k_{2})=y[k_{1},k_{2}]\). Note that \((x_{1},x_{2})\in{\mathbb{R}}^{2}\) is a continuously defined two-component vector, while \(k_{1}\in[0\ldots W-1]\) and \(k_{2}\in[0\ldots H-1]\) are integers. Similarly, a spline volume would write

\[
f(x_{1},x_{2},x_{3})=\sum_{k_{1}\in{\mathbb{Z}}}\sum_{k_{2}\in{\mathbb{Z}}}\sum_{k_{3}\in{\mathbb{Z}}}c[k_{1},k_{2},k_{3}]\,\beta^{N}(x_{1}-k_{1})\,\beta^{N}(x_{2}-k_{2})\,\beta^{N}(x_{3}-k_{3}).
\]
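Because the tensor product is separable, the prefilter can be applied dimension by dimension. The sketch below, again using SciPy's `ndimage` as a stand-in for SplineOps (and assuming SciPy ≥ 1.6 for the `mode` keyword), builds the 2-D coefficient array and checks that the cubic tensor-product spline reproduces the samples on the integer grid while remaining defined at continuous coordinates in between.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
y = rng.standard_normal((5, 7))   # sample array, first index k1, second k2

# Separable prefilter: 2-D tensor-product coefficients c[k1, k2].
c = ndimage.spline_filter(y, order=3, mode='mirror')

# Interrogate the spline on the integer grid: it reproduces y exactly.
k1, k2 = np.meshgrid(np.arange(5), np.arange(7), indexing='ij')
vals = ndimage.map_coordinates(c, [k1, k2], order=3,
                               prefilter=False, mode='mirror')
assert np.allclose(vals, y)

# ...and it is equally defined at arbitrary continuous coordinates.
print(ndimage.map_coordinates(c, [[1.5], [2.25]], order=3,
                              prefilter=False, mode='mirror'))
```

Note `prefilter=False` in the evaluation call: the coefficients were computed once up front, so filtering again would be wrong.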
## Extension of Splines Beyond Their Known Support
It is remarkable that splines can be interrogated at any coordinate, not only at a non-integer one but also at one far removed from the support of the samples that define, say, an image. This is achieved by imposing some structural organization on the spline coefficients. Typically, this organization takes the form of periodicity, possibly combined with local reversals of chunks of samples. It allows one to predict the value of a spline coefficient \(c[{\mathbf{k}}]\) for any \({\mathbf{k}}\in{\mathbb{Z}}^{2}\) from the sole knowledge of \(c[{\mathbf{k}}]\) for \(k_{1}\in[0\ldots W-1]\) and \(k_{2}\in[0\ldots H-1]\).
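One common organization of this kind is the mirror (whole-sample symmetric) extension, which is exactly a periodic repetition combined with local reversals. The sketch below, based on SciPy's `ndimage` boundary mode `'mirror'` rather than on SplineOps itself (and assuming SciPy ≥ 1.6), shows a 1-D cubic spline built from \(K\) samples being interrogated far outside \([0\ldots K-1]\): the values follow a reflection about each edge sample and a period of \(2(K-1)\).

```python
import numpy as np
from scipy import ndimage

y = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])   # K = 6 samples
c = ndimage.spline_filter1d(y, order=3, mode='mirror')

def f(x):
    """Evaluate the cubic spline at a single continuous coordinate."""
    return ndimage.map_coordinates(c, [[x]], order=3,
                                   prefilter=False, mode='mirror')[0]

# Mirror organization: even symmetry about the edge samples and
# periodicity with period 2*(K - 1) = 10.
assert np.isclose(f(-2.3), f(2.3))              # reflection about x = 0
assert np.isclose(f(7.1), f(2*(6 - 1) - 7.1))   # reflection about x = K-1
assert np.isclose(f(3.4 + 10.0), f(3.4))        # periodicity
```

Other organizations (e.g., plain periodization) lead to different but equally predictable out-of-support values; the key point is that a finite coefficient array determines the spline everywhere.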
## Quality of Approximation
The computational burden increases with the degree of a spline, but so does the quality of the representation. Indeed, consider a one-dimensional ground-truth generic function (not necessarily a spline) that is differentiable sufficiently many times. Suppose, as is customary, that this function is known only through its samples, and let a spline of degree \(N\) interpolate them. The higher the degree, the better the chance that the spline comes close to the local Taylor expansion of the ground truth, because \(N+1\) terms can potentially match. This argument is mere hand-waving; its merit is that it relies on no more than calculus to make it plausible that the continuously defined spline represents the continuously defined ground truth better as the degree rises.
To understand the true mechanism that links degree and quality, one needs to master the theory of approximation, which in turn relies on Fourier theory, the theory of distributions, measure theory, sampling theory, and integration theory, among others, all of them advanced mathematical topics [4], [5].
While one-dimensional splines are well understood, the approximation properties of tensor-product splines are less so. Yet, numerous practical experiments have led to the conclusion that a tensor-product cubic spline offers a good tradeoff between computational effort and quality, being significantly more accurate than linear interpolation.
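The gap between linear and cubic accuracy is easy to observe numerically. In this sketch (a SciPy-based illustration, with an arbitrarily chosen sinusoid as ground truth), both a linear (\(N=1\)) and a cubic (\(N=3\)) spline interpolate the same integer-grid samples, and their maximum deviation from the ground truth is measured between the samples, away from the boundaries.

```python
import numpy as np
from scipy import ndimage

# Smooth ground truth, sampled on the integer grid.
K = 64
grid = np.arange(K, dtype=float)
g = lambda t: np.sin(2 * np.pi * t / K)
y = g(grid)

# Interrogation points strictly between samples, well inside the support.
xf = np.linspace(16.0, 48.0, 801)

# Linear spline: no prefilter is needed for N = 1.
f1 = ndimage.map_coordinates(y, [xf], order=1)

# Cubic spline: prefilter once, then evaluate.
c3 = ndimage.spline_filter1d(y, order=3, mode='mirror')
f3 = ndimage.map_coordinates(c3, [xf], order=3,
                             prefilter=False, mode='mirror')

err1 = np.max(np.abs(f1 - g(xf)))
err3 = np.max(np.abs(f3 - g(xf)))
print(err1, err3)

# The cubic spline tracks the smooth ground truth far more closely.
assert err3 < err1
```

For this smooth ground truth, the cubic error is orders of magnitude below the linear one, at the price of the one-time prefilter and a wider support at evaluation time.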