Newton Interpolation and the Derivative in Finite Differences

Relations between the normalized Mellin transform (MT) and Newton interpolation (NI) can shed some light on the validity of a finite difference formula for the derivative alluded to in the MathOverflow question MO-Q: Derivative in terms of finite differences.

From formal symbolic calculus, the forward finite difference is defined by

\displaystyle \triangle_x f(x) = f(x+1) - f(x) = (e^{D_x}-1)f(x) \;,

so inverting gives

\displaystyle D_x = \log(1 + \triangle_x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{\triangle_x^n}{n}.
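Since forward differences of order higher than a polynomial's degree vanish, the log series truncates and recovers the derivative exactly. A minimal Python check (the cubic and the evaluation point are arbitrary choices):

```python
from math import comb

def forward_diff(f, x, n):
    """n-th forward difference with unit step: (e^D - 1)^n f(x)."""
    return sum((-1)**(n - j) * comb(n, j) * f(x + j) for j in range(n + 1))

def derivative_via_log_series(f, x, nmax):
    """Partial sum of D = log(1 + Δ) = Σ (-1)^(n+1) Δ^n / n."""
    return sum((-1)**(n + 1) * forward_diff(f, x, n) / n for n in range(1, nmax + 1))

f = lambda x: x**3                            # cubic test polynomial
print(derivative_via_log_series(f, 2, 3))     # → 12.0 (= 3·2^2)
```

For non-polynomial f the series is only formal, and convergence must be examined separately.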

An equivalent formula can be derived (using slightly different notation) from NI of suitable functions:

\displaystyle \triangle^{s}_{n} \triangle^{n}_{j} f(x+j)=f(x+s)

with \displaystyle \triangle^s_{n}c_n=\sum_{n=0}^{\infty}(-1)^n \binom{s}{n} \; c_n.
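For a quick illustration in exact rational arithmetic (the quadratic and the value s = 1/2 are arbitrary choices), the series terminates for a polynomial and reproduces f(x+s):

```python
from fractions import Fraction
from math import comb

def gbinom(s, n):
    """Generalized binomial coefficient binom(s, n) for rational s."""
    out = Fraction(1)
    for k in range(n):
        out = out * (s - k) / (k + 1)
    return out

def delta_n(f, x, n):
    """Δ^n_j f(x+j) = Σ_j (-1)^j binom(n, j) f(x+j), the post's convention."""
    return sum((-1)**j * comb(n, j) * f(x + j) for j in range(n + 1))

def newton_interp(f, x, s, nmax):
    """Δ^s_n Δ^n_j f(x+j) = Σ_n (-1)^n binom(s, n) Δ^n_j f(x+j)."""
    return sum((-1)**n * gbinom(s, n) * delta_n(f, x, n) for n in range(nmax + 1))

f = lambda y: y**2
print(newton_interp(f, 0, Fraction(1, 2), 2))   # → 1/4 = f(1/2)
```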

Then by formally interchanging differentiation and summation,

\displaystyle \frac{d}{ds} \; \triangle^{s}_{n} \triangle^{n}_{j} f(x+j)|_{s=0}= \sum_{n=1}^{\infty}\frac{-1}{n} \; \triangle^{n}_{j} f(x+j)= D_x f(x).
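The two sign conventions are consistent: \triangle^n_j f(x+j) = (-1)^n \triangle_x^n f(x), so this series agrees term by term with the \log(1+\triangle_x) expansion above. A quick numerical confirmation on a cubic (an arbitrary choice):

```python
from math import comb

def delta_n(f, x, n):
    """Δ^n_j f(x+j) = Σ_j (-1)^j binom(n, j) f(x+j)."""
    return sum((-1)**j * comb(n, j) * f(x + j) for j in range(n + 1))

f = lambda x: x**3
# Σ_{n>=1} (-1/n) Δ^n_j f(x+j) truncates at n = 3 for a cubic
print(sum(-delta_n(f, 2, n) / n for n in range(1, 4)))   # → 12.0, i.e. D_x x^3 at x = 2
```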

The NI, in turn, can be related to the interpolation of discretely sampled MTs for certain classes of functions.

One way is through Ramanujan’s Master Formula (cf. MO-Q: What does Mellin inversion really mean?). Using umbral notation (MSE-Q: What’s umbral calculus about?) for the Taylor series, h(t)=e^{-a.t} with a.^n evaluated as the Taylor coefficient a_n, the MT formally gives

\displaystyle g(s)= \int_{0}^{\infty} h(t) \frac{t^{s-1}}{(s-1)!} \; dt = \int_{0}^{\infty} e^{-a.t} \frac{t^{s-1}}{(s-1)!} \; dt

\displaystyle = \int_{0}^{\infty} e^{-t} e^{(1-a.)t} \frac{t^{s-1}}{(s-1)!} \; dt = \sum_{m=0}^{\infty} (-1)^m \binom{-s}{m}\sum_{k=0}^m(-1)^k \binom{m}{k} a_k \; ,

using \binom{s+m-1}{m}=(-1)^m \binom{-s}{m} after applying the MT term by term to the Taylor series for e^{(1-a.)t}.

Then h(t)=e^{-a.t} with a_n=g(-n), since for s=-n (n=0,1,2,...) the binomial transform here is an involution, and the regularization of the MT gives the Dirac delta function or its derivatives in the integrand; i.e., \frac{t^{-n-1}}{(-n-1)!} is replaced by \delta^{(n)}(t). (Both sides of the equation may converge simultaneously only over restricted values of s, but one side can typically be regarded as the analytic continuation of the other.) Note how suggestive the umbral notation is with

\displaystyle \int_0^\infty \; e^{-a.t} \; \frac{t^{s-1}}{(s-1)!} \; dt = a.^{-s} = a_{-s}.

So, we have

\displaystyle g(-s)= \sum_{m=0}^{\infty} (-1)^m \binom{s}{m}\sum_{k=0}^m(-1)^k \binom mk g(-k)= \triangle^{s}_{m} \triangle^{m}_{k} g(-k) \; ,

and you can regard the MT here as the interpolation of the coefficients of the Taylor series with NI as one of its avatars, or the NI as the interpolation of a discretely sampled, analytically continued MT.
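At integer arguments s = m, the interpolation reduces to the statement that this signed binomial transform is an involution, which is easy to verify exactly in Python (the sample sequence is arbitrary):

```python
from math import comb

def binomial_transform(a):
    """b_m = Σ_k (-1)^k binom(m, k) a_k — the signed binomial transform."""
    return [sum((-1)**k * comb(m, k) * a[k] for k in range(m + 1))
            for m in range(len(a))]

a = [3, 1, 4, 1, 5, 9]                             # arbitrary integer sequence
print(binomial_transform(binomial_transform(a)))   # → [3, 1, 4, 1, 5, 9]
```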

Now identify f(x+j)=g(-j) and f(x+s)=g(-s) to obtain

\displaystyle f(x+s) = \triangle^{s}_{n} \triangle^{n}_{j} f(x+j).

The inverse MT gives, for appropriate \sigma when convergent,

\displaystyle \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{\pi}{\sin(\pi s)} \; g(s) \; \frac{t^{-s}}{(-s)!} \; ds = \sum_{n=0}^{\infty} g(-n) \; \frac{(-t)^{n}}{n!} = h(t) \; .
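A concrete instance (my choice, not from the text): for h(t) = 1/(1+t), the normalized MT gives g(s) = (-s)! = \Gamma(1-s), so g(-n) = n! and the residue sum collapses to a geometric series:

```python
# For h(t) = 1/(1+t), the normalized MT gives g(s) = Γ(1-s), so g(-n) = n!,
# and the residue sum Σ_n g(-n) (-t)^n / n! = Σ_n (-t)^n = 1/(1+t) for |t| < 1.
from math import factorial

t = 0.5                                # arbitrary point inside |t| < 1
partial = sum(factorial(n) * (-t)**n / factorial(n) for n in range(60))
print(partial, 1 / (1 + t))            # both ≈ 2/3
```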

An alternate derivation and additional examples can be found in MSE-Q: Explicitly reconstructing a function from its moments.

For a concrete illustration of this perspective on NI and the derivative formula, consider

g(s)= (x-s)^p,

where p is real. For x > \mathrm{Re}(s) = \sigma > 0, the inverse Mellin transform and the Euler transformation e^{-a.t} = e^{-t}e^{(1-a.)t} give

\displaystyle h(t) = \sum_{j=0}^{\infty}(-1)^j \; (x+j)^p \; \frac{t^j}{j!} = e^{-t} \sum_{j=0}^{\infty}[\triangle^j_k (x+k)^p] \; \frac{t^j}{j!} .

(For p a natural number n, this is umbrally h(t)=(x+\phi.(-t))^n e^{-t}, implied by the generalized Dobinski formula (cf. Ordinary generating function for Bell numbers), where \phi_k(t) are the Bell / Touchard / exponential polynomials.)
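For example, with p = 2 and x = 1 (arbitrary choices), the differences vanish beyond second order and both expansions of h(t) reduce to e^{-t}(1 - 3t + t^2), which a short numerical check confirms:

```python
from math import comb, exp, factorial

p, x, t = 2, 1, 0.3

# left side: truncated Taylor series Σ_j (-1)^j (x+j)^p t^j / j!
lhs = sum((-1)**j * (x + j)**p * t**j / factorial(j) for j in range(40))

# right side: e^{-t} Σ_j [Δ^j_k (x+k)^p] t^j / j!; differences vanish for j > p
def delta_j(j):
    return sum((-1)**k * comb(j, k) * (x + k)**p for k in range(j + 1))

rhs = exp(-t) * sum(delta_j(j) * t**j / factorial(j) for j in range(p + 1))
print(lhs, rhs)   # both equal e^{-0.3}(1 - 0.9 + 0.09)
```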

The Mellin transform with an interchange of integration and summation gives the NI:

\displaystyle (x-s)^p = \int_{0}^{\infty} [ e^{-t} \; \sum_{j=0}^{\infty}[\triangle^j_k (x+k)^p] \; \frac{t^j}{j!} ] \; \frac{t^{s-1}}{(s-1)!} \; dt

\displaystyle = \sum_{j=0}^{\infty} \; [\int_{0}^{\infty} e^{-t} \; \frac{t^j}{j!} \; \frac{t^{s-1}}{(s-1)!} \; dt] \; \triangle^j_k (x+k)^p = \sum_{j=0}^{\infty}\; \binom{s+j-1}{j} \; \triangle^j_k (x+k)^p

\displaystyle = \sum_{j=0}^{\infty} (-1)^j \; \binom{-s}{j} \triangle^j_k (x+k)^p = \triangle_j^{-s} \triangle_k^j (x+k)^p.
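As a nonterminating numerical illustration (parameters are my choice): take p = -1, x = 2, and s = -1/2 in the last identity, so the target is (x-s)^p = (5/2)^{-1} = 2/5. Exact rational arithmetic sidesteps the catastrophic cancellation in the alternating binomial sums:

```python
from fractions import Fraction
from math import comb

x = 2                                  # evaluation point (arbitrary, x > 0)

def gbinom(s, j):
    """Generalized binomial coefficient binom(s, j) for rational s."""
    out = Fraction(1)
    for k in range(j):
        out = out * (s - k) / (k + 1)
    return out

def delta_j(j):
    """Δ^j_k (x+k)^(-1) = Σ_k (-1)^k binom(j, k) / (x+k)."""
    return sum(Fraction((-1)**k * comb(j, k), x + k) for k in range(j + 1))

# Δ_j^{-s} Δ_k^j (x+k)^p with -s = 1/2, p = -1: partial sums approach (5/2)^(-1)
half = Fraction(1, 2)
partial = sum((-1)**j * gbinom(half, j) * delta_j(j) for j in range(30))
print(float(partial))                  # approaches 0.4
```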

Summarizing, with a change of variables (sign),

\displaystyle (x+s)^p = \triangle_j^{s} \triangle_k^j (x+k)^p,

and, with an interchange of differentiation and summation,

\displaystyle D_{s=0}(x+s)^p = p \; x^{p-1} = D_x x^p = D_{s=0} \triangle_j^s \triangle_k^j (x+k)^p = \sum_{j=1}^{\infty} \; \frac{-1}{j} \; \triangle^j_k \; (x+k)^p.

The analysis above suggests that, for p not a positive integer, the derivative formula would not converge for x \le 0, even for p = -1. For polynomials, convergence is no issue: finite differences of order higher than the degree of the polynomial they act on vanish, so the series terminates.
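For x > 0 the formula does converge even for p = -1: at x = 1 one has \triangle^j_k (1+k)^{-1} = 1/(j+1) exactly, so the series telescopes to -1 = D_x x^{-1}|_{x=1}. A sketch in exact rational arithmetic (my illustration, with the truncation order J an arbitrary choice):

```python
from fractions import Fraction
from math import comb

def delta_j(f, x, j):
    """Δ^j_k f(x+k) = Σ_k (-1)^k binom(j, k) f(x+k), in exact arithmetic."""
    return sum((-1)**k * comb(j, k) * f(x + k) for k in range(j + 1))

f = lambda y: Fraction(1, y)           # f(x) = x^(-1), i.e., p = -1
J = 100                                # truncation order

# Here Δ^j_k (1+k)^(-1) = 1/(j+1), so the series telescopes: -Σ 1/(j(j+1)) → -1
partial = sum(Fraction(-1, j) * delta_j(f, 1, j) for j in range(1, J + 1))
print(float(partial))                  # → -100/101 ≈ -0.9901
```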


One Response to Newton Interpolation and the Derivative in Finite Differences

  1. Tom Copeland says:

    The formula for higher-order derivatives in terms of finite differences is given in Abramowitz and Stegun’s compilation. Pincherle discusses the same for fractional derivatives on pg. 376 of “Mémoire sur le calcul fonctionnel distributif”.
