Jumpin’ Riemann!…..!..!.!.Mangoldt–da mon–got it!….!..!

The magic of Mangoldt summoning Riemann’s miraculous minuscules, the nontrivial zeros.

This post responds to observations, initiated by Matt McIrvin, on a sum of exponentials of the imaginary parts of the non-trivial zeroes of the Riemann zeta function (assuming the Riemann hypothesis is true), as presented in a stream of threads through Math StackExchange (MSE), MathOverflow (MO), and the n-Category Cafe. One thread is the MO-Q Quasicrystals and the Riemann Hypothesis posed by John Baez.

The main actors are the Riemann zeta function \zeta(s), the Landau Xi function \xi_L(s) (aka, the Riemann Xi function with the two poles removed), the von Mangoldt function \Lambda(n), the Chebyshev function \psi(x) (aka, the von Mangoldt summatory function), and the Riemann jump function J(x) (aka, the Riemann prime number counting function) with Mellin, Heaviside, and Dirac directing, with a cameo by Fourier.

We formally rederive a relationship between the zeroes of the Riemann zeta function and the powers of the primes (PP henceforth will be our acronym for powers of the primes, or prime powers, the p^n, where p is a prime and n = 1, 2, 3, ...). The relationship manifests itself in spikes observed at the locations of the imaginary parts of the nontrivial zeroes–Dirac delta functions arising from taking a Fourier transform of the derivative of a function that sums over the logs of the PP: the Mangoldt summatory function, aka the Chebyshev staircase function, aka the morphed Riemann jump function for counting the primes.

In fact, the Riemann jump function J and the Chebyshev \psi staircase function are two sides of the same coin. The Riemann J is a sum of Heaviside step functions, H(x), with the edges of the steps located at the powers of the primes (PP) while the Chebyshev \psi gives the same but with edges located at the log of the PP with different step lifts. One can simply be rewritten into the other with a change of parameters. Consequently, taking the derivative of the functions gives us Dirac delta functions located at PP or, alternatively, their logs. And both are directly related to an inverse Mellin transform of the logarithmic derivative of the Riemann zeta.


Basic Algorithm: Logging product formulas for polynomials


Step I: Factor the polynomial or rational function,

F(x) = (x-2)(x-3)\frac{1}{(x-1)}

Step II: Take the logarithmic derivative,

D_x \ln(F(x)) = D_x [\ln(x-2)+\ln(x-3)-\ln(x-1)]=\frac{1}{x-2}+\frac{1}{x-3}-\frac{1}{x-1}.

Step III: Take the inverse Mellin transform:

for \sigma < 1, i.e., putting the line of integration to the left of all the poles of the logarithmic derivative (the zeros and the pole of F), and closing the contour clockwise to the right for x > 1,

\frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [\frac{1}{s-2}+\frac{1}{s-3}-\frac{1}{s-1}] x^{-s} ds= -H(x-1)[x^{-2}+x^{-3}-x^{-1}] .

Step IV: Substitute e^{iu} for x for u > 0 obtaining,

-H(u) [e^{-i2u}+e^{-i3u}-e^{-iu}]

Step V: Extend the result as an even function to u <0,

\frac{-1}{2} [[e^{-i2u}+e^{-i3u}-e^{-iu}] + [e^{i2u}+e^{i3u}-e^{iu}]]= -[cos(2u)+cos(3u)-cos(u)]

Step VI: Take the Fourier transform to obtain Dirac delta functions at the zeros and the pole of F(x) and their negatives,

\frac{1}{2 \pi} \int_{-\infty}^{\infty} \frac{-1}{2}[e^{-i2u}+e^{-i3u}-e^{-iu} + e^{i2u}+e^{i3u}-e^{iu}] e^{-i \omega u} du

= \frac{-1}{2}[\delta(\omega+2)+\delta(\omega+3)-\delta(\omega+1) + \delta(\omega-2)+\delta(\omega-3)-\delta(\omega-1)]
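As a quick numerical sanity check of Steps V and VI (a sketch of my own, with an arbitrary window half-width T standing in for the infinite limits and a helper name finite_ft that is not from the post), the finite-window Fourier transform of the symmetrized cosine sum should show sinc-type spikes of height roughly -T/(2\pi) at \omega = \pm 2, \pm 3 and +T/(2\pi) at \omega = \pm 1, matching the signs of the delta coefficients above:

```python
import numpy as np

# Finite-window version of Step VI: (1/(2*pi)) * integral over [-T, T] of
# g(u) * exp(-i*omega*u) du, with g(u) = -[cos(2u) + cos(3u) - cos(u)].
T = 200.0                              # half-width of the truncated u-window (arbitrary choice)
u = np.linspace(-T, T, 400001)
du = u[1] - u[0]
g = -(np.cos(2*u) + np.cos(3*u) - np.cos(u))

def finite_ft(omega):
    return np.sum(g * np.exp(-1j * omega * u)).real * du / (2 * np.pi)

for w in (1.0, 2.0, 3.0, 2.5):
    print(w, finite_ft(w))
# Expect roughly +T/(2*pi) at omega = 1, -T/(2*pi) at omega = 2 and 3, and ~0 at omega = 2.5.
```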

The basic algorithm is to take logarithmic derivatives of product formulas for a “polynomial” to get a sum of simple poles at the zeroes z_k, or at some other parameters characterizing the “polynomial”, and then to apply the linear inverse Mellin transform to turn these poles 1/(s-z_k) into a sum of exponentiated terms x^{z_k}. Making a simple change of variable gives us a sum of exponentials of the zeroes e^{uz_k}, or of the other parameters–tantamount to taking an inverse Laplace transform of the simple poles.

Our “polynomials” are the Riemann zeta, expressed by the Euler product formula for zeta in terms of the PP, and its equivalent the Landau Xi function, expressed by the Hadamard product formula in terms of its nontrivial zeros, which are the same as zeta’s. The Landau Xi function is the Riemann zeta function with its pole and trivial zeroes removed by multiplying by some fairly simple factors–the most complicated being a gamma function whose poles remove the trivial zeroes (remove them initially but reintroduce them in the subsequent analysis through the poles of the digamma function–the logarithmic derivative of the gamma function).

In the final analysis, we have a relation among the locations of the PP through the derivative of the Riemann jump function; the locations of the logs of the powers of the primes through the derivative of the Chebyshev function, both derivatives related to the inverse Mellin transform of the logarithmic derivative of Euler’s product formula; and the locations of the non-trivial zeros of the Riemann zeta function through the inverse Mellin transform of the logarithmic derivative of Hadamard’s product formula for the Landau Xi, in terms of the nontrivial zeros.


Detailed Formal Analysis


Let’s work through the analysis in more detail.

With H(x) the Heaviside step function and \rho designating a prime,

J(x) = \sum_{n>0} \sum_{\rho} \frac{1}{n} H(x-\rho^n)

= \sum_{m>1} \frac{\Lambda(m)}{\ln(m)} H(x-m),

where n,m are the natural numbers, since

\Lambda(m) = \ln(\rho) for m= \rho^k with k a natural number and vanishes otherwise. Then employing the Dirac delta function

\frac{dJ(x)}{dx} = \sum_{n>0} \sum_{\rho} \frac{1}{n} \delta(x-\rho^n)

= \sum_{m>1} \frac{\Lambda(m)}{\ln(m)} \delta(x-m)

= \sum_{m>1} \frac{\Lambda(m)}{\ln(x)} \delta(x-m),

so

\ln(x) \frac{dJ(x)}{dx} = \sum_{m>1} \Lambda(m) \delta(x-m)= \frac{d\psi(x)}{dx},

where \psi(x)= \sum_{m>1} \Lambda(m) H(x-m).
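To make the two staircases concrete, here is a small hedged sketch in Python (the helper names mangoldt, chebyshev_psi, and riemann_J are mine, not from the post) that builds \Lambda(m), \psi(x), and J(x) directly from the definitions above:

```python
from sympy import factorint
from math import log

def mangoldt(m):
    """von Mangoldt Lambda(m): log(p) if m = p^k for a single prime p, else 0."""
    f = factorint(m)                    # prime factorization as {p: exponent}
    return log(next(iter(f))) if len(f) == 1 else 0.0

def chebyshev_psi(x):
    """psi(x) = sum_{m <= x} Lambda(m): the staircase jumps by log(p) at each p^n."""
    return sum(mangoldt(m) for m in range(2, int(x) + 1))

def riemann_J(x):
    """J(x) = sum_{m <= x} Lambda(m)/log(m): jumps by 1/n at each prime power p^n."""
    return sum(mangoldt(m) / log(m) for m in range(2, int(x) + 1))

print(chebyshev_psi(10))   # 3*log(2) + 2*log(3) + log(5) + log(7) ~ 7.83
print(riemann_J(10))       # pi(10) + pi(10**0.5)/2 + pi(10**(1/3))/3 = 4 + 1 + 1/3 ~ 5.33
```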

Now look at a property of the Mellin transform. If

F(s)= \int_0^{\infty}f(x) x^{s-1}dx,

then for suitably behaved functions

F'(s)= \int_0^{\infty}\ln(x)f(x) x^{s-1}dx,

and, from the earlier post on the Riemann jump function,

\ln[\zeta(1-s)] =\int_0^{\infty} \frac{dJ(x)}{dx} x^{s-1}dx,

implying, for Re(s)< 0,

\frac{d\ln[\zeta(1-s)]}{ds}= -\frac{\zeta'(1-s)}{\zeta(1-s)} = \int_0^{\infty} \ln(x) \frac{dJ(x)}{dx} x^{s-1}dx

= \int_0^{\infty} \sum_{m>1} \Lambda(m) \delta(x-m) x^{s-1}dx = \int_0^{\infty} \frac{d\psi(x)}{dx} x^{s-1}dx

= \sum_{m>1} \Lambda(m) m^{s-1}= \sum_{m>1} \Lambda(m) m^{-1} \exp[s \ln(m)] .

Therefore,

\frac{d\psi(x)}{dx}= \ln(x) \frac{dJ(x)}{dx}= \sum_{m>1} \Lambda(m) \delta(x-m)

= -\frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{\zeta'(1-s)}{\zeta(1-s)} x^{-s} ds.
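A quick numerical spot check of the Dirichlet-series identity above (my own sketch; the sample point s = -2 and the cutoff 50000 are arbitrary), evaluating both sides of -\zeta'(1-s)/\zeta(1-s) = \sum_m \Lambda(m) m^{s-1} for Re(s) < 0:

```python
import mpmath as mp
from sympy import factorint
from math import log

def mangoldt(m):
    """Lambda(m): log(p) if m is a power of the prime p, else 0."""
    f = factorint(m)
    return log(next(iter(f))) if len(f) == 1 else 0.0

s = -2.0                                                   # any s with Re(s) < 0
lhs = -mp.zeta(1 - s, 1, 1) / mp.zeta(1 - s)               # -zeta'(3)/zeta(3) via the derivative option
rhs = sum(mangoldt(m) * m**(s - 1) for m in range(2, 50000))
print(lhs, rhs)                                            # the truncated sum should agree closely
```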

Now make the change of variable x = e^u, giving

\sum_{m>1} \Lambda(m) \delta(e^u-m) = \sum_{m>1} \frac{\Lambda(m)}{m} \delta(u-\ln(m)) = -\frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{\zeta'(1-s)}{\zeta(1-s)} e^{-us} ds,

which agrees with the representation above of the zeta-function ratio and with the inverse Laplace transform representation of a delta function.

The last linear contour integral is essentially an inverse Laplace transform. At this point, it would be good to do a sanity check by numerically evaluating the line integral over finite limits by replacing the infinities by L/2, some finite extent. This will give sinc functions for the delta functions. But, let’s continue.
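For the record, a minimal version of that numerical check might look like the following (all parameter choices–\sigma, the cutoff L, the grid–are mine, and it takes a few seconds to run). It discretizes the truncated line integral and evaluates it at a few heights u; sinc-like peaks of height about (L/(2\pi))\Lambda(m)/m should appear near u = \ln(m) for prime powers m:

```python
import mpmath as mp
from math import log, pi

# Truncated version of the last line integral: the infinities are replaced by +-L/2,
# the line sits at Re(s) = sigma < 0, and a simple Riemann sum does the integration.
sigma, L, n = -1.0, 40.0, 2000
dt = L / n
ts = [-L/2 + k * dt for k in range(n + 1)]

# Precompute zeta'(1-s)/zeta(1-s) along the line s = sigma + i*t.
ratio = [mp.zeta(1 - (sigma + 1j*t), 1, 1) / mp.zeta(1 - (sigma + 1j*t)) for t in ts]

def spike(u):
    acc = mp.mpc(0)
    for t, r in zip(ts, ratio):
        acc += r * mp.exp(-u * (sigma + 1j*t))
    return float((-acc * dt / (2 * pi)).real)    # ds = i dt cancels the i in 1/(2*pi*i)

for u, tag in [(log(2), "log 2"), (log(3), "log 3"), (log(4), "log 4"), (0.95, "between")]:
    print(tag, spike(u))
# Peaks of height ~ (L/(2*pi)) * Lambda(m)/m should stand out at u = log(m):
# roughly 2.2, 2.3, and 1.1 here, with only sinc sidelobes at the point in between.
```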

To get some relation to the non-trivial zeros, multiply the top and bottom of the zeta function ratio to convert the denominator to Landau’s \xi_L(s), an entire function symmetric about Re(s)=1/2, with no trivial zeros, and the same non–trivial zeros as the Riemann zeta :

\frac{\zeta'(1-s)}{\zeta(1-s)} = \frac{ \frac{1}{2} s (s-1) \pi^{\frac{s-1}{2}} (-\tfrac{s+1}{2})!}{ \frac{1}{2} s (s-1) \pi^{\frac{s-1}{2}} (-\tfrac{s+1}{2})!} \frac{\zeta'(1-s)}{\zeta(1-s)}= \frac{ \frac{1}{2} s (s-1) \pi^{\frac{s-1}{2}} (-\tfrac{s+1}{2})! \zeta'(1-s)}{\xi_L(1-s)} = \frac{\frac{1}{2} s (s-1) \pi^{\frac{s-1}{2}} (-\tfrac{s+1}{2})! \zeta'(1-s)}{\xi_L(s)}

= \frac{-s \frac{-s+1}{2} \pi^{\frac{s-1}{2}} (-\tfrac{s+1}{2})! \zeta'(1-s)}{\xi_L(s)} = \frac{-s \pi^{\frac{s-1}{2}} (\tfrac{-s+1}{2})! \zeta'(1-s)}{\xi_L(s)} .

Let’s see if we can tease out a \frac{d}{ds}\ln[\xi_L(s)] by taking the derivative of \xi_L(s) and comparing it to the numerator of our ratio. We can then make use of the Hadamard product representation of this log to relate this to pairs of the non-trivial zeros.

Differentiating term by term and using the digamma or psi function di(s+1)= \frac{d\ln(s!)}{ds} (due to a conflict of notation with the Chebyshev function, I shall use di(s) for the digamma function):

\frac{d \xi_L(s)}{ds}= \frac{d}{ds} [-s \pi^{\frac{s-1}{2}} (\tfrac{-s+1}{2})! \zeta(1-s)]

= [\frac{1}{s} + \frac{\ln(\pi)}{2} - \frac{di(1+ \tfrac{1-s}{2})}{2}] \xi_L(s) + s \pi^{\frac{s-1}{2}} (\tfrac{-s+1}{2})! \zeta'(1-s),

so

s \pi^{\frac{s-1}{2}} (\tfrac{-s+1}{2})! \zeta'(1-s) = \xi_L'(s) - [\frac{1}{s} + \frac{\ln(\pi)}{2} - \frac{di(1+ \tfrac{1-s}{2})}{2}] \xi_L(s),

and our ratio becomes

-\frac{\zeta'(1-s)}{\zeta(1-s)} = - [\frac{1}{s} + \frac{\ln(\pi)}{2} - \frac{di(1+\frac{1-s}{2})}{2}] + \frac{d\ln[\xi_L(s)]}{ds}.
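A numerical spot check of this identity (my own sketch; the sample point s = -1.5 is arbitrary, chosen away from the poles), with \xi_L built from the completed-zeta factors used above and its logarithmic derivative taken by numerical differentiation:

```python
import mpmath as mp

def xi_L(s):
    """Landau/completed xi: (1/2) s (s-1) pi^(-s/2) Gamma(s/2) zeta(s)."""
    return mp.mpf('0.5') * s * (s - 1) * mp.pi**(-s/2) * mp.gamma(s/2) * mp.zeta(s)

s = mp.mpf('-1.5')                                       # a sample point away from poles and zeros
lhs = -mp.zeta(1 - s, 1, 1) / mp.zeta(1 - s)             # -zeta'(1-s)/zeta(1-s)
bracket = 1/s + mp.log(mp.pi)/2 - mp.digamma(1 + (1 - s)/2) / 2
rhs = -bracket + mp.diff(lambda t: mp.log(xi_L(t)), s)   # numerical d/ds log xi_L(s)
print(lhs, rhs)                                          # the two sides should agree closely
```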

The Hadamard product formula gives

\ln[\xi_L(s)] = -\ln(2) + \sum_{i>0} [\ln[1-\frac{s}{z_i}]+ \ln[1-\frac{s}{\bar{z}_i}]],

where the sum is over the zeros above the real axis z_i and the lower zeroes are entered through taking the complex conjugate.

Taking the derivative,

\frac{d}{ds}\ln(\xi_L(s)) = \sum_i [\frac{1}{s-z_i}+\frac{1}{s-\bar{z}_i}]
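And a hedged check of this zero-sum representation itself (again my own sketch): summing over the first couple hundred zero pairs, computed with mpmath's zetazero (slow), should creep toward the numerically differentiated d/ds \ln \xi_L(s); the tail of the sum decays only like \ln(\gamma_N)/\gamma_N, so expect rough agreement at best:

```python
import mpmath as mp

def xi_L(s):
    return mp.mpf('0.5') * s * (s - 1) * mp.pi**(-s/2) * mp.gamma(s/2) * mp.zeta(s)

s = mp.mpf(2)
target = mp.diff(lambda t: mp.log(xi_L(t)), s)       # d/ds log xi_L at s = 2, roughly 0.069
partial = mp.mpc(0)
for k in range(1, 201):                              # first 200 zeros above the real axis (slow)
    z = mp.zetazero(k)
    partial += 1/(s - z) + 1/(s - mp.conj(z))
print(target, partial.real)
# The partial sum creeps up toward the target as more zeros are included.
```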

Now appealing to Mangoldt’s formula, formally derived in Appendix II from the formulas above, we have for \sigma = 1 - \omega < 0,

\psi(x)= \frac{1}{2 \pi i} \int_{\omega - i \infty}^{\omega + i \infty} -\frac{\zeta'(s)}{\zeta(s)} \frac{x^s}{s} ds = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} -\frac{\zeta'(1-s)}{\zeta(1-s)} \frac{x^{-s+1}}{-s+1} ds

= H(x-1) [x - \sum_{z=z_i,\bar{z}_i} \frac{x^{z}}{z} - \frac{1}{2}\ln[1-x^{-2}]- \ln(2 \pi)].

Taking the derivative and relating this to our other expression above,

\frac{d\psi(x)}{dx}= \sum_{m>1} \Lambda(m) \delta(x-m)

= H(x-1) [1 - \sum_{z=z_i,\bar{z}_i} x^{z-1} + \frac{1}{x} + \frac{x}{1-x^2}] - \delta(x-1) \ln(2 \pi) .

Multiplying both sides by x^{1/2}, and suppressing the Heaviside step function, gives

\sum_{m>1} \Lambda(m) m^{1/2} \delta(x-m)

= x^{1/2} + x^{-1/2} - \sum_{z=z_i,\bar{z}_i} x^{iImg(z)} + \frac{x^{3/2}}{1-x^2} - \delta(x-1) \ln(2 \pi) ,

and, letting x=e^{u} for u >0,

\sum_{m>1} \Lambda(m) m^{-1/2} \delta(u-\ln(m))

= e^{u/2} + e^{-u/2} - \sum_{z=z_i,\bar{z}_i} e^{i u Img(z)} + \frac{e^{3u/2}}{1-e^{2u}} - \delta(u) \ln(2 \pi) ,

and,

\sum_{z=z_i,\bar{z}_i} e^{i u Img(z)} = e^{u/2}+e^{-u/2} - \frac{e^{-u/2}}{1-e^{-2u}}

- \sum_{m>1} \Lambda(m) m^{-1/2} \delta(u-\ln(m)) - \delta(u) \ln(2 \pi).

We can extend this equation to u < 0 as an even function on both sides by changing u to -u on both sides and averaging the two equations together. Taking the Fourier transform w.r.t. u will then give an even function of Dirac delta functions on the LHS located at abscissas equal to plus and minus the imaginary parts of the nontrivial zeros.

The 2007 pdf “What is the Riemann hypothesis?” by Mazur and Stein contains plots of partial sums of the cos[uImg(z_k)] and a discussion. I assume their book extends and elaborates on these fantastic facets of the Riemann zeta.
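A small numerical illustration in the spirit of those plots (my own sketch; N and the sample points are arbitrary): the truncated left-hand side, \sum_k 2\cos(u \, Img(z_k)) over the first N zeros, should dip sharply near u = \ln 2, \ln 3, \ln 4, ... and stay comparatively small at generic u, with the dips sharpening as N grows:

```python
import mpmath as mp
from math import log

N = 200                                              # more zeros -> sharper dips (zetazero is slow)
gammas = [mp.im(mp.zetazero(k)) for k in range(1, N + 1)]

def zero_sum(u):
    """Truncated LHS above: sum over the first N zero pairs of 2*cos(u*gamma_k)."""
    return float(sum(2 * mp.cos(u * g) for g in gammas))

for u, tag in [(log(2), "log 2"), (log(3), "log 3"), (log(4), "log 4"), (1.0, "generic")]:
    print(tag, zero_sum(u))
# The values at the prime-power logs should be large and negative compared with the generic point.
```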


Appendix I: Mellin Transforms M and Inverse M^{-1}


Given M[f(x)] = \int_0^{\infty} f(x) x^{s-1}dx=F(s).

M^{-1}[F(s)] = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma+ i \infty} F(s) x^{-s}ds= g(x),

and g(x) = f(x) for suitably chosen \sigma.

1) M[H(x-1)] = \int_1^{\infty} x^{s-1}dx= \frac{x^s}{s}|_1^{\infty} = -1/s for Re(s) < 0 so that the upper evaluation vanishes.

For \sigma < 0,

M^{-1}[-1/s] = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} -\frac{1}{s} x^{-s}ds = H(x-1)

for if we truncate the vertical integration line with the infinities replaced by some positive finite L/2 and close the contour counter-clockwise to the left with a semicircle of radius L/2 for 0< x < 1, the closed contour contains no singularities, so the contour integral evaluates to 0 and the integral along the semicircle tends to zero as L tends to infinity, so the integration over the vertical line evaluates to zero. On the other hand, if we close clockwise to the right with a semicircle, which introduces an overall negative sign, with x > 1, the closed contour contains a simple pole at the origin and evaluates to unity while the integral along the semicircle vanishes in the limit as L tends to positive infinity.
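A quick numerical version of example 1) (a sketch with my own truncation parameters): evaluating the truncated line integral at a few points x should give approximately 0 for x < 1 and 1 for x > 1:

```python
import numpy as np

# Truncated line integral of example 1): (1/(2*pi*i)) * int (-1/s) x^(-s) ds on Re(s) = sigma < 0,
# with the infinities replaced by +-T and a simple Riemann sum (parameter choices are mine).
sigma, T, n = -1.0, 400.0, 200001
t = np.linspace(-T, T, n)
dt = t[1] - t[0]
s = sigma + 1j * t

def inv_mellin(x):
    integrand = (-1.0 / s) * np.exp(-s * np.log(x))   # ds = i dt cancels the i in 1/(2*pi*i)
    return integrand.sum().real * dt / (2 * np.pi)

for x in (0.5, 0.9, 1.5, 3.0):
    print(x, inv_mellin(x))    # ~0, ~0, ~1, ~1 -- approximating H(x-1)
```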

2) M[H(1-x)] = \int_0^{1} x^{s-1}dx= \frac{x^s}{s}|_0^{1} = 1/s for Re(s) >0 so that the lower eval vanishes.

For \sigma > 0,

M^{-1}[1/s] = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{1}{s} x^{-s}ds = H(1-x).

3) M[H(x-1) x^{-1}] = \int_1^{\infty} x^{-1}x^{s-1}dx= -1/(s-1) for Re(s)<1.

For \sigma < 1,

M^{-1}[-1/(s-1)] = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} -\frac{1}{s-1} x^{-s}ds = H(x-1)x^{-1}.

4) M[H(1-x) x^{-1}] = \int_0^{1} x^{-1}x^{s-1}dx= 1/(s-1),

for Re(s) > 1, and for \sigma > 1,

M^{-1}[1/(s-1)] = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{1}{s-1} x^{-s}ds = H(1-x)x^{-1}

5 a.) Now to tackle the inverse Mellin transform of the digamma function di(s) using the representation

di(s+1) = -\gamma + \int_0^1 \frac{1-t^s}{1-t}dt= -\gamma+ \sum_{k=1}^{\infty}[\frac{1}{k}-\frac{1}{s+k}]

= -\gamma + \sum_{k=1}^{\infty}\frac{1}{k}\frac{s}{s+k},

for Re(s) > -1 (for both the integral and the series representations), where \gamma is the Euler–Mascheroni constant.

M^{-1}[di(s+1)] = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [ -\gamma + \int_0^1 \frac{1-t^s}{1-t}dt] x^{-s}ds

= -\gamma \delta(x-1) + \int_0^1 \frac{1\delta(x-1)-t\delta(x-t)}{1-t}dt

= -\gamma \delta(x-1) + [\sum_{k>0} \frac{1}{k}] \delta(x-1) - H(1-x) \frac{x}{1-x}

= - \gamma \delta(x-1) + \sum_{k>0} [\frac{1}{k}\delta(x-1) - x^k H(1-x)],

and

M[ - \gamma \delta(x-1) + \sum_{k>0} [\frac{1}{k}\delta(x-1) - x^k H(1-x)]]

= -\gamma + \sum_{k>0}[\frac{1}{k}-\frac{1}{s+k}]= -\gamma + \sum_{k>0}[\frac{1}{k}\frac{s}{s+k}].

for Re(s) > -1, but analytically continues to all complex numbers except the negative natural numbers -1, -2, -3, ... .

Therefore,

5 b) di(1+\frac{1-s}{2})= -\gamma + \sum_{k>0}[\frac{1}{k}\frac{1-s}{(1-s)+2k}] = -\gamma + \sum_{k>0}[\frac{1}{k}\frac{s-1}{s-(2k+1)}] ,

so the inverse Mellin transform for \sigma < 1 closing clockwise to the right for x >1 gives

M^{-1}[di(1+\frac{1-s}{2})] = -\gamma \delta(x-1) - \sum_{k>0} H(x-1)2 x^{-(2k+1)}

= -\gamma \delta(x-1) - H(x-1) \frac{2}{x(x^2-1)}.

Note the Mellin transform of this expression would have to be regularized to obtain the digamma expression again. This is a common occurrence. In fact, regularization is applied to get a Mellin transform for the continuation of the gamma function to the left of its singularities, and the Euler integral expression for the digamma itself is a differently regularized Mellin transform.

5 c) For Mangoldt’s explicit formula, we need to evaluate a slightly different inverse transform of a digamma function:

for \sigma < 1, closing clockwise to the right for x > 1, gives

\frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma+ i \infty} -di(1+\tfrac{1-s}{2}) \frac{x^{-s+1}}{-s+1}ds = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma+ i \infty} [\gamma - \sum_{k>0}[\frac{1}{k}\frac{s-1}{s-(2k+1)}] ] \frac{x^{-s+1}}{-s+1}ds

= H(x-1) [\gamma - \sum_{k>0}\frac{1}{k}x^{-2k} ] = H(x-1)[\gamma + \ln(1-x^{-2})]

6) A general Mellin transform relation between the Mellin transform of a function and that of an integral of the function through integration by parts:

D_x \int_0^x f(t)dt \frac{x^s}{s}= \int_0^x f(t)dt x^{s-1} + f(x) \frac{x^s}{s},

so integrating over x from 0 to \infty and rearranging terms,

M[\int_0^x f(t)dt] = \int_0^x f(t)dt \frac{x^s}{s}|_0^{\infty} - \int_0^{\infty}f(x)\frac{x^s}{s}dx = -F(s+1)/s

where F(s) = M[f(x)] for suitably chosen Re(s).

Alternatively, with D_x^{-1} f(x) = H(x)\int_0^{x} f(t)dt,

D_x^{-1} f(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} F(s) D_x^{-1 } x^{-s} ds = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} F(s) \frac{x^{-s+1}}{-s+1}ds

= \frac{1}{2 \pi i} \int_{\sigma-1 - i \infty}^{\sigma-1 + i \infty} F(\omega+1) \frac{x^{-\omega}}{-\omega}d\omega,

with s = \omega +1.

7) For example, M[H(x-1)] = H(x-1) \int_1^\infty x^{s-1}dx = -1/ s= F(s) for Re(s) < 0,

and -F(s+1)/s = \frac{1}{s(s+1)}= \frac{1}{s}-\frac{1}{s+1}

giving \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [ \frac{1}{s}-\frac{1}{s+1}] x^{-s} ds = H(x-1)(x-1)

for \sigma < -1.

This is consistent with

D_x H(x-1)(x-1) = \delta(x-1) (x-1) + H(x-1)=H(x-1)

and

D_x \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [ \frac{1}{s(s+1)}] x^{-s} ds = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [ \frac{-1}{s+1}] x^{-s-1} ds

=\frac{1}{2 \pi i} \int_{\sigma+1 - i \infty}^{\sigma+1 + i \infty} [ \frac{-1}{\omega}] x^{-\omega} d\omega = H(x-1)

for s = \omega+1 and \sigma < -1,

and with

H(x)\int_0^x H(t-1)dt = H(x-1) \int_1^x dt = H(x-1)(x-1).

8) To evaluate double poles:

= \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [\frac{G(s)}{(s-1)^2}] x^{-s-1} ds

= \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [\frac{G(s)}{(s-1)(s-1+\epsilon)}] x^{-s-1} ds |_{\epsilon \rightarrow 0}

= \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} G(s)[\frac{1}{s-1}-\frac{1}{s-1+\epsilon}] \frac{1}{\epsilon} x^{-s-1} ds |_{\epsilon \rightarrow 0}

= H(x-1)\frac{G(1+\epsilon)-G(1)}{\epsilon} |_{\epsilon \rightarrow 0} = H(x-1)G'(1)

for \sigma < 1 and G(s) suitably decaying at infinity to the right of \sigma with no other poles or branch cuts in that region. Depending on G(s), the evaluation could be extended to x < 1.

(At first I thought I needed to eval double poles, but I found a way to circumvent it. I leave this for illustrative purposes of the properties of the inverse Mellin transform.)

(We could also collapse our integration line to just beneath and above the real axis, like a Hankel contour, to the right of \sigma and generate an integral along the real axis containing the derivative of a Dirac delta function.)


Appendix II: Mangoldt’s explicit formula for the Chebyshev function \psi(x)


To relate the analysis here to other derivations, note that g'(w) = \tfrac{dg(w)}{dw}, so

-\tfrac{d \ln(g(s))}{ds} = -\tfrac{g'(s)}{g(s)} and \tfrac{d \ln(g(1-s))}{ds} = -\tfrac{g'(1-s)}{g(1-s)}, and, in particular,

-\tfrac{\zeta'(s)}{\zeta(s)} becomes -\tfrac{\zeta'(1-s)}{\zeta(1-s)}

under the obvious change of variable from s to 1-s.

In the main body of this post, we find the relation between Chebyshev’s function, or Mangoldt’s summatory function, and an inverse Mellin transform of the log derivative of the Riemann zeta. Evaluating this transform, and equating it to other expressions above we arrive at Mangoldt’s explicit formula.

For \sigma = 1 - \omega < 0,

\psi(x)= \frac{1}{2 \pi i} \int_{\omega - i \infty}^{\omega + i \infty} -\frac{\zeta'(s)}{\zeta(s)} \frac{x^s}{s} ds = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} -\frac{\zeta'(1-s)}{\zeta(1-s)} \frac{x^{-s+1}}{-s+1} ds

= \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [- [\frac{1}{s} + \frac{1}{s-1}+ \frac{\ln(\pi)}{2} - \frac{di(\tfrac{1-s}{2})}{2}] + \frac{d\ln[\xi_L(s)]}{ds}] \frac{x^{-s+1}}{-s+1} ds.

= \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} [- [\frac{1}{s} + \frac{\ln(\pi)}{2} - \frac{di(1+\frac{1-s}{2})}{2}] + \frac{d\ln[\xi_L(s)]}{ds}] \frac{x^{-s+1}}{-s+1} ds.

Evaluating term by term and using the identity for the digamma in Appendix I Example 5c,

\psi(x) = H(x-1)[(x-1) - \frac{\ln(\pi)}{2} - \frac{\gamma}{2} - \frac{\ln(1-x^{-2})}{2} - \sum_i [\frac{x^{-z_i+1}}{-z_i+1}+\frac{x^{-\bar{z}_i+1}}{-\bar{z}_i+1}] + \frac{\xi_L'(1)}{\xi_L(1)}]

Since the symmetry \xi_L(s) = \xi_L(1-s) gives \frac{\xi_L'(1)}{\xi_L(1)} = -\frac{\xi_L'(0)}{\xi_L(0)}, and according to “Relations and positivity results for the derivatives of the Riemann \xi function” by Coffey

\frac{\xi_L'(1)}{\xi_L(1)} = -\frac{\xi_L'(0)}{\xi_L(0)} = 1 - \ln(2) - \frac{\ln(\pi)}{2} + \frac{\gamma}{2} = .02309...,

\psi(x) = H(x-1)[x - \frac{\ln(1-x^{-2})}{2} - \sum_i [\frac{x^{-z_i+1}}{-z_i+1}+\frac{x^{-\bar{z}_i+1}}{-\bar{z}_i+1}] - \ln(2 \pi)].
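A hedged numerical check of this explicit formula (my own sketch; the truncation at N zero pairs, the sample point, and the helper names are arbitrary), comparing the staircase \psi(x) built from \Lambda(m) against the zero-sum expression, written here in the equivalent form \sum_\rho x^\rho/\rho with \rho = 1/2 + i\gamma on the critical line:

```python
import mpmath as mp
from sympy import factorint

def psi_direct(x):
    """Chebyshev psi(x): sum of log(p) over prime powers p^k <= x."""
    total = mp.mpf(0)
    for m in range(2, int(x) + 1):
        f = factorint(m)
        if len(f) == 1:
            total += mp.log(list(f)[0])
    return total

def psi_explicit(x, N=100):
    """x - sum over the first N zero pairs of x^z/z - log(2*pi) - (1/2)*log(1 - x^-2)."""
    x = mp.mpf(x)
    zeros = mp.mpf(0)
    for k in range(1, N + 1):
        z = mp.zetazero(k)                   # z = 1/2 + i*gamma_k on the critical line
        zeros += 2 * (x**z / z).real         # add the conjugate zero by taking 2*Re
    return x - zeros - mp.log(2 * mp.pi) - mp.log(1 - x**(-2)) / 2

x = 100.5                                    # keep x away from the jumps at the prime powers
print(psi_direct(x), psi_explicit(x, N=100)) # the two should be close, and closer as N grows
```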

Appendix III: Dirac delta combs and approximations

Consider the nascent Dirac delta function given by

\delta_L(x) = \int_{-\infty}^{\infty} Rect[\frac{\omega}{L}] e^{i 2 \pi \omega x} d\omega= \int_{-L/2}^{L/2} e^{i 2 \pi \omega x} d\omega = \frac{sin(L \pi x)}{\pi x}= L \, sinc[L \pi x].

Then the sifting property holds on functions continuous around the origin and suitably behaved elsewhere:

lim_{L \rightarrow \infty} \int_{-\infty}^{\infty} f(x) \delta_L(x)dx= f(0) .

This can be shown for suitable functions by using the Fourier convolution theorem and then taking the limit, but works for a wider class of functions also.

Note that taking the Fourier transform over finite limits gives us our oscillating sinc function, a nascent delta function, rather than a sharp spike.

Now consider a sum of exponentials of uniformly spaced purely real numbers x_k symmetric about the origin when the spacing between the numbers is \Delta x = \frac{L}{2N+1} so that x_k = k \Delta x for k = -N, -N+1, ..,0, 1,2, ..,N and L = (2N+1) \Delta x. Then

\sum_{-N}^N \exp[i 2 \pi \omega x_k] = \sum _{k=-N}^{N} \exp[i 2 \pi \omega k \Delta x] = \sum _{k=-N}^{N} \exp[i 2 \pi \omega \frac{k L}{2N+1}]

= \sum _{k=-N}^{N} a^{k} = -1 + \tfrac{1-a^{N+1}}{1-a}+ \tfrac{1-a^{-(N+1)}}{1-a^{-1}} = \tfrac{a^{\tfrac{2N+1}{2}}- a^{-\tfrac{2N+1}{2}}}{a^{\tfrac{1}{2}}-a^{-\tfrac{1}{2}}}

= \frac{\exp[i \pi \omega L]- \exp[-i \pi \omega L]}{\exp[i\pi \omega \Delta x]-\exp[-i \pi \omega \Delta x]} = \frac{sin[L \pi \omega]}{sin[ \pi \omega \Delta x]}= \frac{sin[(2N+1) \pi (\omega-\tfrac{m}{\Delta x}) \Delta x]}{ \pi (\omega-\tfrac{m}{\Delta x}) \Delta x} \frac{ \pi (\omega-\tfrac{m}{\Delta x}) \Delta x}{sin[\pi (\omega-\tfrac{m}{\Delta x}) \Delta x]}

for any integer m, so the sum of exponentials gives a periodic function which behaves about \omega = \frac{m}{\Delta x} as

\frac{sin[(2N+1) \pi (\omega-\tfrac{m}{\Delta x}) \Delta x]}{ \pi (\omega-\tfrac{m}{\Delta x}) \Delta x}

which tends to \delta[\omega-\tfrac{m}{\Delta x}]

as L and N tend to infinity with \Delta x a fixed constant, giving us our Dirac comb

\sum_{k=-\infty}^{\infty} \exp[i 2 \pi \omega x_k] = \sum_{m=-\infty}^{\infty} \delta(\omega - \frac{m}{\Delta x}).

Of course, the zeros of the Riemann zeta are not evenly spaced, so we can’t expect to find a Dirac comb in our case, but the analysis does suggest that, at best, summing over a finite number of zeros will give us oscillating sinc functions rather than Dirac delta functions.
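A tiny numerical confirmation of the finite-sum identity above (parameter choices are mine): the sum of 2N+1 evenly spaced exponentials matches the periodic kernel sin(L\pi\omega)/sin(\pi\omega\Delta x), whose peaks of height 2N+1 sit at \omega = m/\Delta x:

```python
import numpy as np

# 2N+1 evenly spaced points x_k = k*dx; compare the exponential sum with the closed form.
N, dx = 20, 0.5
L = (2 * N + 1) * dx
k = np.arange(-N, N + 1)
w = np.linspace(-3.1, 3.1, 2000)          # grid chosen to avoid landing exactly on w = m/dx

lhs = np.array([np.exp(1j * 2 * np.pi * wi * k * dx).sum().real for wi in w])
rhs = np.sin(L * np.pi * w) / np.sin(np.pi * w * dx)

print(np.max(np.abs(lhs - rhs)))                  # essentially zero: the identity holds on the grid
print(lhs[np.argmin(np.abs(w - 1 / dx))])         # near w = 1/dx = 2 the sum peaks near 2N+1 = 41
```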

Appendix IV: Riemann’s explicit formula (added Oct. 2, 2019)

Riemann derived

J(x) = Li(x) - \sum_{z} Li(x^{z})-\ln(2)+\int_x^{\infty} \frac{dt}{t(t^2-1)\ln(t)} = H(x) \sum_{n>0} \frac{1}{n} \sum_p H(x-p^n)

where Li(x) = \int_2^x \frac{1}{\ln(t)}dt, the logarithmic integral, and, as above, z denotes the nontrivial zeros.

From the main text above, we have

\ln(x) \frac{dJ(x)}{dx}= \frac{d\psi(x)}{dx},

so for x > 1, we have from the analysis in the main text

\frac{d\psi(x)}{dx}= \sum_{m>1} \Lambda(m) \delta(x-m)

= 1 - \sum_{z=z_i,\bar{z}_i} x^{z-1} + \frac{1}{x} + \frac{x}{1-x^2} ,

and from differentiating Riemann’s explicit formula, we obtain consistently

= \ln(x) \frac{dJ(x)}{dx} = 1 - \sum_{z} x^{z-1} + \frac{1}{x} + \frac{x}{1-x^2}= \sum_{n>0} \frac{1}{n} \sum_p \ln(p^n) \delta(x-p^n)

= \sum_{n>0} \sum_p \ln(p) \delta(x-p^n) .

Appendix V: More basic Mellin transform properties

Dirichlet series are best thought of as inhabiting the inverse Mellin transform space and as the dual of a Dirac delta distribution in real space as shown in my earlier post on the Riemann jump function:

M^{-1}[DS(s)= \sum_{n>0} a(n) n^{s-1}] = \sum_{n>0} a(n) \delta(x-n) = \widehat{DS}(x)

Then

M^{-1}[F(s)DS(s)] = \sum_{n>0} \frac{a(n)}{n} f(\frac{x}{n}) = \int_{0}^{\infty} \frac{1}{t} f(\frac{x}{t}) \widehat{DS}(t) dt = \int_{0}^{\infty} \frac{1}{t} \widehat{DS}(\frac{x}{t}) f(t) dt

= \int_{0}^{\infty} f(t) \sum_{n>0}\frac{a(n)}{n} \delta(t-\tfrac{x}{n}) dt

Note this becomes \sum_{n>0} \frac{a(n)}{x} f(\frac{n}{x}) if f(x)= \frac{1}{x} f(\frac{1}{x}) or, equivalently, F(s) = F(1-s). See the famous example in my answer to the Math Stackexchange question “Does the functional equation f(1/r)= rf(r) have any nontrivial solutions … ?”

What is the inverse Mellin transform of \zeta(1-s)/\zeta(s) for \sigma = 1/2?

Appendix VI: Some useful properties of the Dirac delta

Under construction:

1) Sifting property

We define the Dirac delta “function” by its sifting property acting within an integral on a function suitably behaved about a small neighborhood of the point where the argument of the delta function vanishes:

\int_{-\infty}^{\infty}f(t) \delta(t-x)dt = f(x),

so we have

H(x) \ln(x) \delta(x-y) = H(y)\ln(y) \delta(x-y).

The singularity at the origin is no problem if y >0 since by integration by parts

D_x [H(x) f(x)[x \ln(x)-1]\delta(x-y)]

= \delta(x)f(x)[x \ln(x)-1]\delta(x-y)+H(x)f(x)\ln(x)\delta(x-y)

+ H(x)f(x)[x \ln(x)-1]\delta'(x-y)+H(x)f'(x)[x \ln(x)-1]\delta(x-y),

so

I = \int_{-\infty}^{\infty}H(t) \ln(t) f(t) \delta(t-x)dt

= -\int_{-\infty}^{\infty}[H(t)f(t)[t \ln(t)-1]\delta'(t-x)+H(t)f'(t)[t \ln(t)-1]\delta(t-x)]dt.

This has no singularity at the origin if f(x) doesn’t.

Now taking the derivative of the sift equation above, we obtain

\int_{-\infty}^{\infty}f(t) \delta'(t-x)dt = -f'(x), so

I = f'(x)[x \ln(x)-1]+ f(x)\ln(x) - f'(x)[x \ln(x)-1]=f(x)\ln(x).

(In fact, the relevant functions dealt with in the main text are null at the origin and for some interval to the right of it.)

Changing variables we derive other properties.

2) Even symmetry property

Let u=x-t, then

f(x) = \int_{\infty}^{-\infty}f(x-u) \delta(-u)(-du)

=\int_{-\infty}^{\infty}f(x-u) \delta(-u)du, but also

f(x)=\int_{-\infty}^{\infty}f(x-u) \delta(u)du,

so the Dirac delta is an even “function”; i.e.,

\delta(-x) = \delta(x).

3) Reciprocal scaling property

With I = \int_{-\infty}^{\infty}f(t) \delta[a(t-x)]dt,

let u = a(t-x). Then for a > 0,

I = \int_{-\infty}^{\infty}f(\frac{u}{a}+x) \delta[u]\frac{1}{a}du= \frac{f(x)}{a} = \int_{-\infty}^{\infty}f(t) \frac{\delta(t-x)}{a}dt,

and for a < 0,

I = \int_{-\infty}^{\infty}f(\frac{u}{a}+x) \delta[u]\frac{1}{-a}du= \frac{f(x)}{-a} = \int_{-\infty}^{\infty}f(t) \frac{\delta(t-x)}{|a|}dt.

Therefore, \delta(a(t-x)) = \frac{\delta(t-x)}{|a|}.

4) Composition property

Let g(t) = g(t_0) + g'(t_0)(t-t_0)+...= g'(t_0)(t-t_0)+... with only one zero and that at t = t_0, then since we are only concerned with where the argument of the delta vanishes

I = \int_{-\infty}^{\infty}f(t) \delta(g(t))dt

= \int_{-\infty}^{\infty}f(t) \delta[g'(t_0)(t-t_0)]dt .

= \int_{-\infty}^{\infty}f(t) \frac{\delta(t-t_0)}{|g'(t_0)|}dt = \frac{f(t_0)}{|g'(t_0)|}.

5) The derivative property

Clearly the derivative of the delta is odd since the delta is even:

\delta(x-y) =\delta(y-x), so

D_x \delta(x-y) = \delta'(x-y) = D_x \delta(y-x) = -\delta'(y-x).

Use the limit of the Newton-Fermat quotient,

\int_{-\infty}^{\infty} f(x) \frac{\delta(x+\epsilon) -\delta(x)}{\epsilon}dx= \frac{f(-\epsilon)-f(0)}{\epsilon}

in the limit gives

\int_{-\infty}^{\infty} f(x) \delta'(x)dx = -f'(0).

Similarly, by a shift, or change of variable,

\int_{-\infty}^{\infty} f(x) \delta'(x-y)dx = -f'(y).

6) The BYOYB property

Beware! The magical delta can bite you on your butt at a moment’s notice if you aren’t careful. For example, the nascent Dirac delta function discussed above

\frac{sin[L \pi x]}{\pi x}

is sometimes said to approach the Dirac delta as L approaches infinity yet it lacks the important symmetry

\delta(x-1) = \frac{1}{x} \delta(\frac{1}{x}-1) = \delta(\frac{1}{x}-1)

that follows from the properties above. This apparent quandary is resolved once the nascent function is embedded in an integral:

\int_{-\infty}^{\infty} \frac{sin[L \pi (x-1)]}{\pi(x-1)} f(x)dx \rightarrow f(1) as L \rightarrow \infty

\int_{-\infty}^{\infty} \frac{1}{x} \frac{sin[L \pi (\frac{1}{x}-1)]}{\pi(\frac{1}{x}-1)} f(x)dx = \int_{-\infty}^{\infty}  \frac{sin[\frac{L}{x} \pi (1-x)]}{\pi(1-x)} f(x)dx \rightarrow f(1) as L \rightarrow \infty

Related Stuff:

Trying to corroborate my analysis, I found consistent results and extensions

1) “Notes on the Riemann hypothesis” by Perez-Marco

2) The Prime Number Theorem: a proof outline: at the Number Theory and Physics Archives

3) Chebyshev function at Wikipedia

4) Beurling zeta functions, generalized primes, and fractal membranes by Hilberdink and Lapidus

5) Spectral analysis and the Riemann hypothesis by Lachaud

6) An essay on the Riemann Hypothesis by Alain Connes

Added on Oct. 3, 2019:

7) “The Riemann hypothesis explained” by Veisdal (blog post). A quick general intro to the Riemann zeta, RH, and the prime number theorem.

8) “Riemann’s Explicit Formula” by Sean Li. Derivations with convergence arguments provided.

More

9) “A history of the prime number theorem” by Anita Alexander

10) “A history of the prime number theorem” by Goldstein

11) “Mellin convolution and its extensions, Perron formula, and explicit formulae” by Jose Javier Garcia Moreta. An analysis also based on simple Mellin transform properties.

12) Convergence of Riemann spectrum/Fourier transform of prime powers, an MSE question posed by Joe Knapp and answered by reuns

13) “Twenty female mathematicians” by H. Williams. (See the section on Mirzakhani.)

14) “Some remarks on the Riemann zeta distribution” by Allan Gut
