## Representing integration in the reciprocal spaces of the Fourier and Laplace transforms

There’s some confusion in certain MathOverflow entries concerning representations of integration in the reciprocal spaces of the Fourier and Laplace transforms, confusion that arises from not distinguishing among integration operators with different limits of integration (combined with some handwaving about group characters). Misassociation, or conflation, of distinct integration ops has been a source of historical confusion (cf. [Threefold Interpretation of Fractional Derivatives][1] by R. Hilfer as well as the comments in [MO question][2]). The convolution theorems provide a way to view the differing integrations and to translate them into relatively simple factors in the reciprocal spaces.

Typical well-defined integration ops commute with neither the derivative nor the translation ($T$) ops; i.e., integration ops are not generally translation invariant (only in particular cases). For example, for some arbitrary constant $c$,

$\displaystyle D^{-1}_{x,c}\exp(x\;y)=\int_{c }^{x}\exp(u\;y)du=\frac{\exp(x\;y)-\exp(c\;y)}{y},$

and (with the commutator $[A,B] = A \; B - B \; A$)

$[D_x,D^{-1}_{x,c}]\exp(x\;y)=\exp(c\;y)$ and

$[T_{{x\rightarrow x+h}},D^{-1}_{x,c}]\exp(x\;y)=\exp(c\;y)[\exp(h\;y)-1]/y$.
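Both commutators can be confirmed numerically with a short quadrature sketch in plain Python (the values of $c$, $y$, $x$, and $h$ below are arbitrary choices for illustration, not anything dictated by the text):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# arbitrary test values for the constant c, the parameter y, the point x, and the shift h
c, y, x, h = -1.0, 0.7, 1.3, 0.4
e = lambda u: math.exp(u * y)              # the group character e^{uy}
I = lambda t: simpson(e, c, t)             # D^{-1}_{x,c} acting on it

# [D_x, D^{-1}_{x,c}] e^{xy} = e^{cy}
eps = 1e-5
D_of_I = (I(x + eps) - I(x - eps)) / (2 * eps)   # derivative of the integral
I_of_D = simpson(lambda u: y * e(u), c, x)       # integral of the derivative
comm_D = D_of_I - I_of_D
print(comm_D, math.exp(c * y))                   # both ~ e^{cy}

# [T_{x->x+h}, D^{-1}_{x,c}] e^{xy} = e^{cy}(e^{hy} - 1)/y
T_of_I = I(x + h)                                # translate after integrating
I_of_T = simpson(lambda u: e(u + h), c, x)       # integrate the translate
comm_T = T_of_I - I_of_T
print(comm_T, math.exp(c * y) * (math.exp(h * y) - 1) / y)
```

Both commutators depend on $c$ through $e^{cy}$, which is the point: shifting the lower limit changes the operator.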

Let’s look at some particular values for the lower limit of integration:

$\displaystyle D^{-1}_{x,-\infty}\exp(x\;y)=\frac{\exp(x\;y)}{y}$ only if $y >0,$

and

$\displaystyle D^{-1}_{x,0}\exp(x\;y)=\frac{\exp(x\;y)-1}{y}.$
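A quick numerical sanity check of these two lower-limit formulas (a sketch: the $-\infty$ limit is truncated at a finite point, which is harmless here since $y > 0$ makes the integrand negligible there):

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson quadrature on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

y, x = 0.8, 1.5     # arbitrary test values with y > 0
e = lambda u: math.exp(u * y)

# lower limit -infinity, truncated at -40 (integrand ~ e^{-32} there, negligible for y > 0)
val_minus_inf = simpson(e, -40.0, x)
print(val_minus_inf, e(x) / y)          # ~ e^{xy}/y

# lower limit 0
val_zero = simpson(e, 0.0, x)
print(val_zero, (e(x) - 1.0) / y)       # ~ (e^{xy} - 1)/y
```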

These integration operators cannot be naively interchanged with the integral of the integral transform and applied to the exponential (group character) in the integrand to give a sensible result; i.e., they do not naively commute through the transform as division by $2 \pi i f$ for the inverse Fourier transform

$\displaystyle g(x)=FT^{-1}_{f\rightarrow x} \; \tilde g_{FT}(f)=\int_{-\infty }^{\infty}\tilde g_{FT}(f)\exp(2 \pi i fx)df$

nor division by $p$ for the inverse Laplace transform

$\displaystyle H(x)g(x) =LP^{-1}_{p\rightarrow x} \; \tilde g_{LT}(p)=H(x) \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \tilde g_{LT}(p)\; e^{x\;p} dp,$

where the Heaviside step function $H(x)$ is explicitly introduced.

Given

$\displaystyle FT_{x\rightarrow f}H(x)=\frac{1}{2 \pi i f}+\frac{\delta(f)}{2}$ (the Cauchy principal value, C.P.V., applies) and $\displaystyle LP_{x\rightarrow p}H(x)=\frac{1}{p},$

for the appropriate pairing of transforms and integration ops, the convolution theorems guide us to the correct results

$\displaystyle D^{-1}_{x,-\infty}\;g(x)=\int_{-\infty }^{x}g(u)du=\int_{-\infty }^{\infty}H(x-u)g(u)du=FT^{-1}_{f\rightarrow x} \;[\frac{1}{2 \pi i f}+\frac{\delta(f)}{2}]\; \tilde g_{FT}(f)$

and

$\displaystyle H(x)D^{-1}_{x,0}\;g(x)=H(x)\int_{0}^{x}g(u)du=\int_{-\infty}^{\infty}H(x-u)H(u)g(u)du=LP^{-1}_{p\rightarrow x} \; \frac{1}{p}\tilde g_{LT}(p)$,

so

$\displaystyle FT_{x\rightarrow f}D^{-1}_{x,-\infty}\;g(x)=[\frac{1}{2 \pi i f}+\frac{\delta(f)}{2}]\; \tilde g_{FT}(f)$

and

$LP_{x\rightarrow p}H(x)D^{-1}_{x,0}\;g(x)=\frac{1}{p} \; \tilde g_{LT}(p).$
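As a numerical illustration of the Laplace pairing, take $g(x) = e^{-x}$ (my own choice of test function, not one from the discussion above), for which $\tilde g_{LT}(p) = 1/(p+1)$; the transform of the running integral should then be $\frac{1}{p}\,\tilde g_{LT}(p) = \frac{1}{p(p+1)}$:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson quadrature on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

p = 2.0
g = lambda u: math.exp(-u)
G = lambda u: 1.0 - math.exp(-u)   # D^{-1}_{x,0} g, by the antiderivative

T = 40.0  # truncation of the upper limit; the tail is O(e^{-pT}), negligible
lp_g = simpson(lambda u: g(u) * math.exp(-p * u), 0.0, T)
lp_G = simpson(lambda u: G(u) * math.exp(-p * u), 0.0, T)
print(lp_g, 1.0 / (p + 1.0))   # tilde g_LT(p) = 1/(p+1)
print(lp_G, lp_g / p)          # LP of the running integral = (1/p) tilde g_LT(p)
```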

The convolution theorems are basically a result of the factoring $e^{a+b}=e^a e^b$, and the Fourier transform of $H(x)$ can be intuited from the Laplace transform result $\frac{1}{p}=\frac{1}{Re(p)+iIm(p)}$ through this [MSE-Q][3].

These convolution integrals are translation invariant in the sense that

$\displaystyle \int_{-\infty }^{\infty}H(x+h-u)g(u)du=\int_{-\infty }^{\infty}H(x-u)g(u+h)du$

and

$\displaystyle \int_{-\infty}^{\infty}H(x+h-u)H(u)g(u)du=\int_{-\infty}^{\infty}H(x-u)H(u+h)g(u+h)du.$

Check: As a numerical check for the FT formula, using $\displaystyle FT\left(\frac{\sin(\pi \; x)}{\pi \; x}\right)=\operatorname{rect}(f)$ gives

$\displaystyle \int_{-\infty }^{x}\frac{\sin(\pi \; u)}{\pi \; u}du=\int_{-1/2 }^{1/2} [\frac{1}{2 \pi i f}+\frac{\delta(f)}{2}]\exp(2 \pi i fx)df=\frac{1}{2}+\frac{1}{2}\int_{-1 }^{1}\frac{\sin(\pi f\;x)}{\pi f}df$.
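Carrying out that check in code (a sketch; it uses the fact that the sinc integral over $(-\infty,0]$ contributes exactly $1/2$, so both sides reduce to finite-interval quadratures):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def sinc(u):  # sin(pi u)/(pi u) with the u -> 0 limit filled in
    return 1.0 if abs(u) < 1e-12 else math.sin(math.pi * u) / (math.pi * u)

x = 2.3  # arbitrary evaluation point
# left side: int_{-inf}^x sinc = 1/2 + int_0^x sinc, since the integral over (-inf, 0] is 1/2
lhs = 0.5 + simpson(sinc, 0.0, x)
# right side: 1/2 + (1/2) int_{-1}^{1} sin(pi f x)/(pi f) df = 1/2 + int_0^1 x sinc(f x) df
rhs = 0.5 + simpson(lambda f: x * sinc(f * x), 0.0, 1.0)
print(lhs, rhs)   # the two sides agree
```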

I also have trouble with the liberal use of the word “diagonalization” for the integration op for the FT, since this suggests that higher-order integrations can be obtained by simply raising the factor containing the Dirac delta to a power, but this gives nonsense.

Instead, for a second integration for the example,

$\displaystyle \int_{-\infty }^{x}\int_{-\infty }^{u}\frac{\sin(\pi \; w)}{\pi \; w}dwdu=\int_{-\infty }^{x}(x-u)\frac{\sin(\pi \; u)}{\pi \; u}du$

$\displaystyle =F.P.\int_{-1/2 }^{1/2} [\frac{1}{(2 \pi i f)^2}+\frac{-\delta'(f)}{2(2 \pi i)}]\exp(2 \pi i fx)df=\frac{x}{2}+\int_{-1/2 }^{1/2} [\frac{1-\cos(2 \pi fx)}{(2 \pi f)^2}]df$,

where $F.P.$ denotes the Hadamard finite part.

However, if we use the formal trick,

$\displaystyle \frac{\delta(f)}{2 \pi i f}=H(f)\frac{f^{-1}}{(-1)!}\frac{1}{2 \pi if}=H(f)\frac{f^{-2}}{(-2)!}\frac{-1}{2 \pi i} = \frac{-\delta'(f)}{2 \pi i},$ we have at least a mnemonic employing division by $2 \pi i f$.

Summary: (Cf. example above and [here][4])

For the Laplace transform paired with the translation invariant definite integral $H(x)D^{-1}_{x,0}$, diagonalization is achieved, and the derivative and this integral op do commute when acting on $H(x) x^n$; i.e.,

$D_x H(x)D^{-1}_{x,0} H(x) x^n = D_x H(x) \frac{x^{n+1}}{n+1} = \delta(x) \; \frac{x^{n+1}}{n+1} + H(x) x^n = H(x) x^n$, since $x^{n+1}\,\delta(x) = 0$ for $n \ge 0$,

and

$H(x)D^{-1}_{x,0}D_x H(x) x^n = H(x)D^{-1}_{x,0} [x^n \delta(x) + H(x) \;n \; x^{n-1}] = H(x) x^n$, where $x^n\,\delta(x) = 0$ for $n \ge 1$.
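A numerical sketch of this commutation on the half line (with $n = 3$ and an arbitrary evaluation point $x > 0$, both my choices):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

n_pow, x = 3, 1.7          # arbitrary power and evaluation point x > 0
f = lambda u: u ** n_pow

# D_x applied after H(x) D^{-1}_{x,0}: differentiate the running integral
eps = 1e-5
F = lambda t: simpson(f, 0.0, t)
d_of_int = (F(x + eps) - F(x - eps)) / (2 * eps)

# H(x) D^{-1}_{x,0} applied after D_x: integrate the derivative from 0
# (the x^n delta(x) term contributes nothing here since n >= 1)
int_of_d = simpson(lambda u: n_pow * u ** (n_pow - 1), 0.0, x)

print(d_of_int, int_of_d, x ** n_pow)   # all ~ x^n
```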

For the FT, regularization (C.P.V., Hadamard finite-part integrals, as well as Dirac deltas and their derivatives) must be introduced whether translation invariance is present, as for $D^{-1}_{x,-\infty}$, or not, as for $D^{-1}_{x,0}$, in the examples in [MO-Q][4] that more closely resemble “diagonalization.” (Note that the upper set of integrals in my answer to that entry is shorthand for the finite-part integrals defined in the sequel of that entry; naive, simple division in the integrand would result in a divergence.)

You’d expect some type of division by $2 \pi if$ to at least partially represent the action of the various integration ops in Fourier space, since each op is a “raising” op for ladders/sequences of functions formed by consecutive integrations of a function, sequences whose “lowering” op is the derivative, represented by multiplication by $2 \pi if$ in the Fourier space. Singularities introduced by this division have to be tempered by some regularization of the inverse FT, such as the Cauchy $P.V.$ or the Hadamard $F.P.$ The Dirac delta and its derivatives arise from differences in the range of integration for the integration ops and from repeated application of the ops. The formal application of the convolution theorem to repeated convolutions, when applicable, can provide guidance in the selection of the Dirac delta and its derivatives.

One more technical detail: In a sense, there are two Dirac delta functions being used here: one for the Laplace transform, for which

$\displaystyle \int^{\infty}_{0} \delta_{LP}(x) e^{-px} \; dx = 1$  ,

and one for the Fourier transform for which

$\displaystyle \int^{\infty}_{-\infty} H(x) \delta_{FT}(x) e^{-2\pi i fx} \; dx = H(0)= 1/2 = \int^{\infty}_{0} \delta_{FT}(x) e^{-2 \pi i fx} \; dx$

to give consistent limits for this Fourier transform acting on the delta function as a limit of a sinc function, i.e., the limit as $L$ tends to infinity of $\displaystyle L \; \operatorname{sinc}(Lx) = \frac{\sin(\pi Lx)}{\pi x} = \int^{L/2}_{-L/2} e^{2 \pi i fx} \; df$. The convolution theorem for the FT of the product of the Heaviside function and the sinc function gives

$\displaystyle \int^{L/2}_{-L/2} \left[\frac{\delta(f-\hat{f})}{2} + \frac{1}{2\pi i(f-\hat{f})}\right] d\hat{f} \; \Big|_{f=0} = 1/2$.
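The $1/2$ can also be seen numerically: rescaling $\int_0^\infty L\,\operatorname{sinc}(Lu)\,du$ gives $\int_0^{X} \operatorname{sinc}(u)\,du \to 1/2$ as the cutoff $X$ grows, with an $O(1/X)$ deviation. A stdlib sketch:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson quadrature on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def sinc(u):  # sin(pi u)/(pi u) with the u -> 0 limit filled in
    return 1.0 if abs(u) < 1e-12 else math.sin(math.pi * u) / (math.pi * u)

# int_0^X sinc approaches 1/2 like O(1/X) as the cutoff X grows
vals = {X: simpson(sinc, 0.0, X, n=int(400 * X)) for X in (10.0, 100.0, 1000.0)}
for X, v in vals.items():
    print(X, v)
```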

This is also consistent with complex contour integral reps in which the contour intersects the pole identified with the delta function. The delta function and other distributions are to be identified with and defined by action on integrals in some limit.

The MathOverflow question whose answers puzzled me and compelled me to put down these considerations in detail is http://mathoverflow.net/questions/2809/intuition-for-integral-transforms. Make your own judgments about which viewpoints are more constructive if you are motivated to actually work out some numerical results. (A bit of wisdom by an advocate of evolutionary psychology: If you want to really understand what someone is saying you need to know with whom he is arguing. True all too often in mathematics as well.)