*This series is divergent, so we may be able to do something with it. — Heaviside*

The divergent series for the pole of the Riemann zeta function is the harmonic series $\sum_{n \ge 1} 1/n$. Let's use Mellin transform interpolation (essentially Ramanujan's master formula) to interpolate the harmonic numbers, the partial sums of the divergent series, in the hope that we can glean some global numerics of the Riemann zeta. The digamma function and its various avatars will naturally spring forth.

But first these excerpts:

The world of ideas is ‘one’ – i.e., it is a cohesive living organism in which all ‘parts’ interact and where even slight stimulations propagate producing echoes in the (seemingly) distant organism which may be called theories or ‘branches of mathematics’. Similarly, like in the Weierstrass-Riemann principle of ‘analytic continuation’, a change of a (meromorphic) function, even within a very small domain (environment), affects through analytic continuation the whole of the Riemann surface, or analytic manifold. Riemann was a master in applying this principle and also the first who noticed and emphasized that a meromorphic function is determined by its ‘singularities’. — Maurin

*Euler was probably the first to see that these series can be applied to number theory. He was in correspondence with C. Goldbach and J. L. Lagrange just on number theory questions. His proof of the existence of infinitely many primes uses the divergence of the harmonic series and the above fundamental theorem of arithmetic, which says that every natural number can uniquely be written as a product of powers of primes.*

**I) The digamma function as an interpolation of the harmonic numbers**

The Ramanujan-Mellin-Newton interpolation formula, using the umbral lowering of superscripts and Euler’s integral formula for the gamma function, is

.

The shortest route to the Newton series is through the two integrals on the last line where we directly employ an umbral Euler’s integral (UEI)

.

where the notation denotes that we must develop a power series rep before umbrally lowering superscripts.

Note for where c is some positive constant (or operator) that this UEI gives as the interpolation of the Taylor series coefficients (TSC) of and that the UEI, as well as the Newton series, gives the th TSC when allowing the interpretation

for which

.

Here denotes the Heaviside step function.

Returning to our Newton series for interpolating , notice that we have not provided a value for yet.

Euler provided the obvious integral formula

$$H_s = \int_0^1 \frac{1 - x^s}{1 - x} \, dx,$$

giving $H_\varepsilon = \zeta(2)\,\varepsilon - \zeta(3)\,\varepsilon^2 + \cdots$ if we perturb about 0.
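As a quick sanity check, a minimal Python sketch (naive midpoint quadrature; helper names are ad hoc) showing the integral reproducing and interpolating the harmonic numbers:

```python
import math

def harmonic_integral(s, n=20000):
    """Midpoint-rule approximation of Euler's integral H_s = int_0^1 (1 - x^s)/(1 - x) dx."""
    h = 1.0 / n
    return sum((1.0 - ((i + 0.5) * h) ** s) / (1.0 - (i + 0.5) * h) for i in range(n)) * h

# Integer arguments reproduce the harmonic numbers H_n = 1 + 1/2 + ... + 1/n,
print(harmonic_integral(5), sum(1.0 / k for k in range(1, 6)))

# while a non-integer argument interpolates them: H_{1/2} = 2 - 2 ln 2.
print(harmonic_integral(0.5), 2.0 - 2.0 * math.log(2.0))
```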

Taking finite differences of the harmonic integral,

$$\Delta^n H_0 = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} H_k = \int_0^1 \frac{-(x-1)^n}{1-x} \, dx = (-1)^{n+1} \int_0^1 (1-x)^{n-1} \, dx = \frac{(-1)^{n+1}}{n}.$$

Now plugging Euler’s harmonic integral into the Newton series gives

$$H_z = \sum_{n=1}^{\infty} \binom{z}{n} \Delta^n H_0 = \sum_{n=1}^{\infty} (-1)^{n+1} \binom{z}{n} \frac{1}{n}.$$

Implying, when we expand $x^z = e^{z \ln x}$ in the integrand, that

$$H_z = \psi(z+1) + \gamma = \sum_{k=2}^{\infty} (-1)^k \zeta(k) \, z^{k-1},$$

for $|z| < 1$.
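This expansion can be confirmed numerically; a minimal Python sketch (crude partial sums stand in for the zeta values; names are ad hoc):

```python
import math

def zeta(k, terms=100000):
    """Partial-sum approximation of zeta(k) for integer k >= 2."""
    return sum(n ** -k for n in range(1, terms + 1))

def H_from_zeta_series(z, K=30):
    """The expansion sum_{k=2}^{K} (-1)^k zeta(k) z^{k-1}, valid for |z| < 1."""
    return sum((-1) ** k * zeta(k) * z ** (k - 1) for k in range(2, K + 1))

def H_direct(z, terms=100000):
    """H_z = psi(z+1) + gamma = sum_{m>=1} (1/m - 1/(m+z))."""
    return sum(1.0 / m - 1.0 / (m + z) for m in range(1, terms + 1))

z = 0.5  # inside the radius of convergence; H_{1/2} = 2 - 2 ln 2
print(H_from_zeta_series(z), H_direct(z), 2.0 - 2.0 * math.log(2.0))
```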

**II) The digamma meets the Riemann zeta through the log derivative of the rising factorial: Power sum series for the digamma function**

One way of generating such poles is to take a limit as $n$ tends to infinity of the logarithmic derivative of a simple product, the rising factorials,

$$\frac{d}{dz} \ln\left[(z+1)(z+2)\cdots(z+n)\right] = \sum_{k=1}^{n} \frac{1}{z+k},$$

and lo and behold we have our digamma, or $\psi$, function.

Then in the limit, this becomes

$$\psi(z+1) = \lim_{n \to \infty} \left( \ln n - \sum_{k=1}^{n} \frac{1}{z+k} \right),$$

and the Euler-Mascheroni constant materializes as $\psi(1) = -\gamma$. We note that this also shows $H_n - \ln n \to \gamma$ as $n \to \infty$.

Such products lie at the foundations of the formalism of symmetric polynomials/functions–the elementary, complete homogeneous, and power sum symmetric polynomials–and we now use the common maneuver of relating this logarithm of products to the power sums, which turn out to be $\zeta(k)$ in the limit as $n \to \infty$.

Focusing on the last summand,

$$\frac{1}{z+k} = \frac{1}{k} \cdot \frac{1}{1 + z/k} = \sum_{j=0}^{\infty} (-1)^j \frac{z^j}{k^{j+1}}.$$

Then

$$\psi(z+1) + \gamma = \sum_{k=1}^{\infty} \left( \frac{1}{k} - \frac{1}{z+k} \right) = \sum_{j=1}^{\infty} (-1)^{j+1} \zeta(j+1) \, z^{j},$$

and we have our anticipated connection of global, yet isolated, numerics of the Riemann zeta function to its behavior just to the right of its single pole.

**III) Summary of formulas for the digamma function**

Reprising, with 20/20 hindsight we have established a connection of the partial sums of the divergent series for $\zeta(1)$, aka the harmonic numbers, through their shifted interpolation, the digamma function, to the values of the Riemann zeta at the natural numbers greater than one. Hidden in the foliage are the rising factorials, whose polynomials contain the Stirling numbers of the first kind as their coefficients, and the power sum symmetric polynomials.

There is also an associated sinc function interpolation:

Collecting our reps of $H_z = \psi(z+1) + \gamma$ so far, we have

$$H_z = \sum_{k=1}^{\infty} (-1)^{k+1} \binom{z}{k} \frac{1}{k}$$

for the Newton series,

$$H_z = \int_0^1 \frac{1 - x^z}{1 - x} \, dx$$

for Euler’s integral, and

$$H_z = \sum_{k=2}^{\infty} (-1)^k \zeta(k) \, z^{k-1}$$

for $|z| < 1$,

where

$$\gamma = \lim_{n \to \infty} (H_n - \ln n),$$

and

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$$

with $\operatorname{Re}(s) > 1$.

*IV) Mellin transform space and the digamma function*

Now let’s morph our interpolation from the inverse Mellin space of complex numbers back to the real space.

, so

, and

.

There is no convergence problem since

.

Then the inverse Mellin transform gives

,

which is in agreement with the Bateman/Erdelyi Tables of Integral Transforms.

A direct inverse transform gives,

The function summand cannot be further reduced and is to be interpreted as the regularization summand.

The same holds for

Interpreting and checking,

Also

consistent with

.

The equivalent inverse transform (apply inside the line integral below) is

A change of variable transforms this line integral into , so checking consistency by taking the Mellin transform, for ,

(The digamma manifests in previous entries in connection with the trivial zeros of the Riemann zeta. See the post “Jumpin’ Riemann …”, in which we require another inverse Mellin transform.)

**V) The digamma diff op as a raising/creation op and an infinitesimal generator for fractional calculus**

The digamma has popped up in previous entries as the differential component of a raising op for the Appell polynomials of the gamma genus that form a basis for a fractional calculus and, equivalently, as the differential component of the infinitesimal generator for the fractional (actually, real powers of) integrals and derivatives of that calculus.

*V 1.) Operational calculus, finite differences, and Mellin/Newton interpolation*

is an eigenfunction of the Euler, or state number, op and, by definition, with the eigenvalues and , the falling factorial, respectively; that is,

and

so we can expect both to be intimately connected to the Mellin transform and Newton/Mellin interpolation.

In addition, they are associated with an important pair of umbrally inverse, binomial Sheffer polynomial sequences with significant combinatorial interpretations–the Bell polynomials and the falling factorials, denoted above, with coefficients the Stirling numbers of the second kind and the first kind, respectively. These can be easily generalized to multinomial partition polynomials–the general Bell polynomials, OEIS A036040, or the refined Stirling partition polynomials of the second kind, and the cycle index partition polynomials for the symmetric groups, or refined Stirling partition polynomials of the first kind, OEIS A036039.

With the convenience of notation and the formulaic suggestivity/heuristic benefits provided by the umbral maneuver and with , we have some formulas that lie at the heart of the umbral operational calculus:

so

and, therefore,

We also have our generalized Taylor series from a generalized shift/translation operator acting on :

So, the umbral operational calculus shadows that of the binomial convolution, hence the descriptor umbral.

Now back to our Sheffer sequences and their op reps. Define the Bell polynomials operationally through exponentiation of the number op, or Euler op, as

.

Then the e.g.f. for the Bell polynomials is

Note also

since vanishes for any polynomial of order for .

The Bell numbers are given by , so

This is called the Dobinski formula (explained combinatorially by Rota), a particular case of the more general formula

with and .
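The basic case is easy to verify numerically; a minimal Python sketch (truncation level is ad hoc) checking Dobinski’s sum against the Bell-triangle recurrence:

```python
import math

def bell_dobinski(n, terms=60):
    """Dobinski's formula: B_n = (1/e) * sum_{k>=0} k^n / k!."""
    return math.exp(-1) * sum(k ** n / math.factorial(k) for k in range(terms))

def bell_recurrence(n):
    """Bell numbers via the Bell-triangle (Aitken's array) recurrence."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

print([round(bell_dobinski(n)) for n in range(8)])
print([bell_recurrence(n) for n in range(8)])  # 1, 1, 2, 5, 15, 52, 203, 877
```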

Operationally, using Taylor series,

so that operating formally on a function

where .

Similarly, acting on a function not necessarily analytic at the origin,

where

Another useful differential formula (a variation on the theme) is

so

where

or, more generally,

Also, when operating on a function analytic at the origin,

The following identity is useful in moving in the reverse direction–obtaining operational identities from Newton series/Mellin interpolations:

so

,

i.e., the falling factorials and the Bell polynomials are an umbral inverse pair, which satisfy

,

with

where

essentially equivalent to multiplication of matrix inverse pairs.
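At the level of coefficient matrices, this umbral inversion is just the statement that the Stirling-number triangles of the second and (signed) first kinds are matrix inverses; a small Python check using the standard recursions:

```python
def stirling2(n, k):
    """Stirling numbers of the second kind, S2(n, k) (Bell polynomial coefficients)."""
    if n == 0 or k == 0:
        return 1 if n == k else 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def stirling1(n, k):
    """Signed Stirling numbers of the first kind, s(n, k) (falling factorial coefficients)."""
    if n == 0 or k == 0:
        return 1 if n == k else 0
    return stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)

N = 7
S2 = [[stirling2(n, k) for k in range(N)] for n in range(N)]
S1 = [[stirling1(n, k) for k in range(N)] for n in range(N)]

# The product is the identity matrix, i.e., the Bell polynomials and the
# falling factorials are an inverse pair under umbral composition.
P = [[sum(S2[i][j] * S1[j][k] for j in range(N)) for k in range(N)] for i in range(N)]
print(P == [[1 if i == k else 0 for k in range(N)] for i in range(N)])  # True
```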

Now we are ready to generalize to evaluations of the action of more complex operators through the inverse Mellin transform, defining

or

where

Then

*V 2.) The digamma in a raising op for an Appell sequence*

This section will incorporate material from several earlier entries and MathOverflow and Math Stackexchange questions I posed:

A.) The Raising / Creation Operators for Appell Sequences, a blog post and pdf.

B.) Riemann Zeta Function at Positive Integers and an Appell Sequence of Polynomials, a MathOverflow question

C.) Lie Group Heuristics for a Raising Operator for , a Math Stackexchange question

D) Multiple Zeta Values Related to Fractional Calculus and an Appell Polynomial Sequence, a MathOverflow question

E) Cycling Through the Zeta Garden: Zeta Functions for Graphs, Cycle Index Polynomials, and Determinants, a MathOverflow question

F) Goin’ with the Flow: The Logarithm of the Derivative Operator, a blog post and pdf

G) Fractional Calculus, Gamma Classes, the Riemann Zeta Function, and an Appell Pair of Sequences, a blog post

H) Fractional Calculus and Interpolation of Generalized Binomial Coefficients, a blog post

I) Fractional Calculus, Interpolation, and Travelling Waves, a blog post and pdf

J) Mellin Interpolation of Differential Ops and Associated Infingens and Appell Polynomials: The Ordered, Laguerre, and Scherk-Witt-Lie Diff Ops, a blog post

By virtue of the relation between the values of the Riemann zeta function at the negative integers, , and the Bernoulli numbers and between the Bernoulli polynomials and the partial sums of the powers of the natural numbers and derivatives of analytic functions, the Riemann zeta can be related to the integration and differentiation of analytic functions.

Through the relation between the values of the Riemann zeta function at the positive natural numbers greater than one, , and a series expansion of the digamma function and between a digamma differential operator and the infinigen (infinitesimal generator) of a fractional calculus, the Riemann zeta can be related to the fractional calculus–the calculus of fractional integral and differential operators acting on real functions analytic on the positive real axis.

This post is in response to observations, initiated by Matt McIrvin, of a sum of exponentials of the imaginary part of the non-trivial zeroes of the Riemann zeta function, assuming the Riemann hypothesis is true, as presented in a stream through Mathstackexchange (MSE), Mathoverflow (MO), and the n-Category Cafe. One thread is the MO-Q Quasicrystals and the Riemann Hypothesis posed by John Baez.

The main actors are the Riemann zeta function , the Landau Xi function (aka, the Riemann Xi function with the two poles removed), the von Mangoldt function , the Chebyshev function (aka, the von Mangoldt summatory function), and the Riemann jump function (aka, the Riemann prime number counting function) with Mellin, Heaviside, and Dirac directing, with a cameo by Fourier.

We formally rederive a relationship between the zeroes of the Riemann zeta function and the powers of the primes (PP henceforth will be our acronym for powers of the primes, or prime powers, the $p^n$, where $p$ is a prime and $n$ a positive integer) that manifests itself in spikes observed at locations of the imaginary part of the nontrivial zeroes–Dirac delta functions arising from taking a Fourier transform of the derivative of a function that sums over the log of PP, the Mangoldt summatory function, aka, the Chebyshev staircase function, aka, the morphed Riemann jump function for counting the primes.

In fact, the Riemann jump function and the Chebyshev staircase function are two sides of the same coin. The Riemann jump function is a sum of Heaviside step functions, , with the edges of the steps located at the powers of the primes (PP), while the Chebyshev gives the same but with edges located at the log of the PP and with different step lifts. One can simply be rewritten into the other with a change of parameters. Consequently, taking the derivative of the functions gives us Dirac delta functions located at the PP or, alternatively, their logs. And both are directly related to an inverse Mellin transform of the logarithmic derivative of the Riemann zeta.
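As a concrete check on the staircase picture, a brute-force Python sketch (naive trial division; helper names are ad hoc) of the von Mangoldt function and the Chebyshev staircase:

```python
import math

def mangoldt(n):
    """von Mangoldt function: log p if n = p^k for a prime p, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        # the smallest divisor > 1 is prime; test whether n is a pure power of it
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

def chebyshev_psi(x):
    """Chebyshev staircase: psi(x) = sum_{n <= x} Lambda(n), jumping at the PP."""
    return sum(mangoldt(n) for n in range(2, int(x) + 1))

# Prime powers up to 10 are 2, 3, 4, 5, 7, 8, 9, so
# psi(10) = 3 log 2 + 2 log 3 + log 5 + log 7 = log 2520.
print(chebyshev_psi(10), math.log(2520))
```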

**Basic Algorithm: Logging product formulas for polynomials**

**Step I:** Factor the polynomial or rational function,

**Step II**: Take the logarithmic derivative,

.

**Step III:** Take the inverse Mellin transform:

for , i.e., putting the line of integration to the left of all the zeros, and closing the contour clockwise to the right,

.

**Step IV:** Substitute for , obtaining,

**Step V:** Extend the result as an even function to ,

**Step VI**: Take the Fourier transform to obtain Dirac delta functions at the zeros and the pole of and their negatives,

The basic algorithm is to take logarithmic derivatives of product formulas for a “polynomial” to get a sum of simple poles at the zeroes , or at some other parameters characterizing the “polynomial”, and then to apply the linear inverse Mellin transform to turn these poles into a sum of exponentiated terms . Making a simple change of variable gives us a sum of exponentials of the zeroes , or the other parameters–tantamount to taking an inverse Laplace transform of the simple poles.

Our “polynomials” are the Riemann zeta, expressed by the Euler product formula for zeta in terms of the PP , and its equivalent, the Landau Xi function, expressed by the Hadamard product formula in terms of its nontrivial zeros, which are the same as zeta’s. The Landau Xi function is the Riemann zeta function with its pole and trivial zeroes removed by multiplying by some fairly simple factors–the most complicated being a gamma function whose singularities remove the trivial zeroes (remove them initially, but they are reintroduced in the subsequent analysis through the poles of the digamma function, the logarithmic derivative of the gamma function).

In the final analysis we have a relation among the locations of the PP through the derivative of the Riemann jump function; the locations of the log of the powers of the primes through the derivative of the Chebyshev function, both derivatives related to the inverse Mellin transform of the logarithmic derivative of Euler’s product formula; and the locations of the non-trivial zeros of the Riemann zeta function through the inverse Mellin transform of the logarithmic derivative of Hadamard’s product formula for the Landau Xi, in terms of the nontrivial zeros.

*Detailed Formal Analysis*

Let’s work through the analysis in more detail.

With the Heaviside step function and designating a prime,

where are the natural numbers, since

for with a natural number and vanishes otherwise. Then employing the Dirac delta function

so

where .

Now look at a property of the Mellin transform. If

,

then for suitably behaved functions

and, from the earlier post on the Riemann jump function,

implying, for ,

Therefore,

Now make the change of variable , giving

which agrees with the rep above of the zeta function ratio and the inverse Laplace transform rep of a delta function.

The last linear contour integral is essentially an inverse Laplace transform. At this point, it would be good to do a sanity check by numerically evaluating the line integral over finite limits by replacing the infinities by , some finite extent. This will give sinc functions for the delta functions. But, let’s continue.

To get some relation to the non-trivial zeros, multiply the top and bottom of the zeta function ratio to convert the denominator to Landau’s , an entire function symmetric about , with no trivial zeros, and the same non–trivial zeros as the Riemann zeta :

.

Let’s see if we can tease out a by taking the derivative of and comparing it to the numerator of our ratio. We can then make use of the Hadamard product representation of this log to relate this to pairs of the non-trivial zeros.

Differentiating term by term and using the digamma or psi function (due to a conflict of notation with the Chebyshev function, I shall use for the digamma function):

so

and our ratio becomes

.

The Hadamard product formula gives

where the sum is over the zeros above the real axis and the lower zeroes are entered through taking the complex conjugate.

Taking the derivative,

Now appealing to Mangoldt’s formula, formally derived in Appendix II from the formulas above, we have for

Taking the derivative and relating this to our other expression above,

Multiplying both sides by , and suppressing the Heaviside step function, gives

and, letting for ,

,

and,

.

We can extend this equation to as an even function on both sides by changing to on both sides and averaging the two equations together. Taking the Fourier transform w.r.t. will give an odd function of Dirac delta functions on the LHS located at abscissas equal to the imaginary part of the nontrivial zeros.

The 2007 pdf “What is the Riemann hypothesis?” by Mazur and Stein contains plots of partial sums of the and a discussion. I assume their book extends and elaborates on these fantastic facets of the Riemann zeta.

**Appendix I: Mellin Transforms and Inverses**

Given .

,

and for suitably chosen .

**1) ** for so that the upper evaluation vanishes.

For ,

for , if we truncate the vertical integration line with the infinities replaced by some positive finite and close the contour counter-clockwise to the left with a semicircle of radius for , the closed contour contains no singularities, so the contour integral evaluates to zero, and the integral along the semicircle tends to zero as tends to infinity, so the integration over the vertical line evaluates to zero. On the other hand, if we close clockwise to the right with a semicircle, which introduces an overall negative sign, with , the closed contour contains a simple pole at the origin and evaluates to unity, while the integral along the semicircle vanishes in the limit as tends to positive infinity.
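Example 1 is easy to probe numerically; a Python sketch of the truncated line integral (the abscissa, cutoff, and step count are ad hoc choices):

```python
import math

def perron_step(x, a=2.0, T=400.0, steps=80000):
    """Trapezoidal approximation of (1/(2 pi i)) int_{a-iT}^{a+iT} x^{-s}/s ds.
    With s = a + i t, ds = i dt, this is (1/(2 pi)) int_{-T}^{T} x^{-(a+it)}/(a+it) dt,
    and it should approach the step: 1 for 0 < x < 1, and 0 for x > 1."""
    h = 2.0 * T / steps
    total = 0.0 + 0.0j
    for i in range(steps + 1):
        t = -T + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * x ** complex(-a, -t) / complex(a, t)
    return (total * h / (2.0 * math.pi)).real

print(perron_step(0.5))  # close to 1
print(perron_step(2.0))  # close to 0
```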

**2)** for so that the lower eval vanishes.

For ,

.

**3) ** for .

For .

.

**4) **,

for , and for ,

**5 a)** Now to tackle the inverse Mellin transform of the digamma function using the representation

,

for (or 0 for the integral rep?), where is the Euler-Mascheroni constant.

,

and

.

for , but analytically continues to all complex numbers except the negative natural numbers .

Therefore,

**5 b)** ,

so the inverse Mellin transform for closing clockwise to the right for gives

Note the Mellin transform of this expression would have to be regularized to obtain the digamma expression again. This is a common occurrence. In fact, regularization is applied to get a Mellin transform for the continuation of the gamma function to the left of its singularities, and the Euler integral expression for the digamma itself is a differently regularized Mellin transform.

**5 c)** For Mangoldt’s explicit formula, we need to evaluate a slightly different inverse transform of a digamma function:

for closing clockwise to the right for gives

**6) **A general Mellin transform relation between the Mellin transform of a function and that of an integral of the function through integration by parts:

so integrating over from to and rearranging terms,

where for suitably chosen .

Alternatively, with ,

with .

**7)** For example, for ,

and

giving

for .

This is consistent with

and

for and ,

and with

.

**8)** To evaluate double poles:

for and suitably decaying at infinity to the right of with no other poles or branch cuts in that region. Depending on , the evaluation could be extended to .

(At first I thought I needed to eval double poles, but I found a way to circumvent it. I leave this for illustrative purposes of the properties of the inverse Mellin transform.)

(We could also collapse our integration line to just beneath and above the real axis, like a Hankel contour, to the right of and generate an integral along the real axis containing the derivative of a Dirac delta function.)

**Appendix II: Mangoldt’s explicit formula for the Chebyshev function**

To relate the analysis here to other derivations, note that , so

and , and, in particular,

becomes

under the obvious change of variable from to .

In the main body of this post, we find the relation between Chebyshev’s function, or Mangoldt’s summatory function, and an inverse Mellin transform of the log derivative of the Riemann zeta. Evaluating this transform and equating it to other expressions above, we arrive at Mangoldt’s explicit formula.

For ,

Evaluating term by term and using the identity for the digamma in Appendix I Example 5c,

Since symmetry gives , and according to “Relations and positivity results for the derivatives of the Riemann ξ function” by Coffey

,

.

*Appendix III: Dirac delta combs and approximations*

Consider the nascent Dirac delta function given by

.

Then the sifting property holds on functions continuous around the origin and suitably behaved elsewhere:

.

This can be shown for suitable functions by using the Fourier convolution theorem and then taking the limit, but works for a wider class of functions also.

Note that taking the Fourier transform over finite limits gives us our oscillating function, a nascent delta function, rather than a sharp spike.
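Taking the nascent delta to be the finite-limit Fourier kernel $\sin(T x)/(\pi x)$, the sifting behavior can be checked numerically; a Python sketch (cutoffs and the Gaussian test function are ad hoc choices):

```python
import math

def sift(f, T=200.0, L=10.0, steps=400000):
    """Midpoint-rule approximation of int_{-L}^{L} f(x) sin(T x)/(pi x) dx,
    which should approach f(0) as T grows (for suitably behaved f)."""
    h = 2.0 * L / steps
    total = 0.0
    for i in range(steps):
        x = -L + (i + 0.5) * h  # midpoints never hit the removable singularity at 0
        total += f(x) * math.sin(T * x) / (math.pi * x)
    return total * h

print(sift(lambda x: math.exp(-x * x)))  # close to f(0) = 1
```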

Now consider a sum of exponentials of uniformly spaced purely real numbers symmetric about the origin when the spacing between the numbers is so that for and . Then

for any integer , so the sum of exponentials gives a periodic function which behaves about as

which tends to

as and tend to infinity with a fixed constant, giving us our Dirac comb

.

Of course, the zeros of the Riemann zeta are not evenly spaced, so we can’t expect to find a Dirac comb in our case, but the analysis does suggest that, at best, summing over a finite number of zeros will give us oscillating sinc functions rather than Dirac delta functions.
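For uniform spacing, the comb calculation above can be verified against the closed-form Dirichlet kernel; a Python sketch assuming unit spacing (names mine):

```python
import cmath, math

def exp_sum(t, N):
    """Sum of exponentials e^{i k t} over the uniformly spaced 'frequencies' k = -N..N."""
    return sum(cmath.exp(1j * k * t) for k in range(-N, N + 1)).real

def dirichlet(t, N):
    """Closed form of the same sum: sin((N + 1/2) t) / sin(t / 2), period 2 pi."""
    return math.sin((N + 0.5) * t) / math.sin(0.5 * t)

N = 40
print(exp_sum(0.7, N), dirichlet(0.7, N))            # the two agree
print(exp_sum(1e-9, N), 2 * N + 1)                    # peak height 2N+1 near each multiple of 2 pi
print(abs(exp_sum(0.7 + 2 * math.pi, N) - exp_sum(0.7, N)))  # periodicity
```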

**Appendix IV: Riemann’s explicit formula** (added Oct. 2, 2019)

Riemann derived

where $\operatorname{Li}$ is the logarithmic integral and, as above, $\rho$ denotes the nontrivial zeros.

From the main text above, we have

,

so for , we have from the analysis in the main text

,

and from differentiating Riemann’s explicit formula, we obtain consistently

.

**Appendix V:** **More basic Mellin transform properties**

Dirichlet series are best thought of as inhabiting the inverse Mellin transform space and as the dual of a Dirac delta distribution in real space as shown in my earlier post on the Riemann jump function:

Then

Note this becomes if or, equivalently, . See the famous example in my answer to the Math Stackexchange question “Does the functional equation have any nontrivial solutions … ?”

What is the inverse Mellin transform of for ?

** Appendix VI**:

Under construction:

1) Sifting property

We define the Dirac delta “function” by its sifting property acting within an integral on a function suitably behaved about a small neighborhood of the point where the argument of the delta function vanishes:

,

so we have

.

The singularity at the origin is no problem if since by integration by parts

,

so

.

This has no singularity at the origin if doesn’t.

Now taking the derivative of the sift equation above, we obtain

, so

.

(In fact, the relevant functions dealt with in the main text are null at the origin and for some interval to the right of it.)

Changing variables we derive other properties.

2) Even symmetry property

Let , then

,

, but also

,

so the Dirac delta is an even “function”; i.e.,

.

3) Reciprocal scaling property

With ,

let . Then for ,

,

and for ,

Therefore, .

4) Composition property

Let with only one zero and that at , then since we are only concerned with where the argument of the delta vanishes

.

.

5) The derivative property

Clearly the derivative of the delta is odd since the delta is even:

, so

.

Use the limit of the Newton-Fermat quotient,

in the limit gives

.

Similarly, by a shift, or change of variable,

.

6) The BYOYB property

Beware! The magical delta can bite you on your butt at a moment’s notice if you aren’t careful. For example, the nascent Dirac delta function discussed above

is sometimes said to approach the Dirac delta as approaches infinity, yet it lacks the important symmetry

that follows from the properties above. This apparent quandary is resolved once the nascent function is embedded in an integral:

as

as

**Related Stuff:**

Trying to corroborate my analysis, I found consistent results and extensions

1) “Notes on the Riemann hypothesis” by Perez-Marco

2) The Prime Number Theorem: a proof outline: at the Number Theory and Physics Archives

3) Chebyshev function at Wikipedia

4) Beurling zeta functions, generalized primes, and fractal membranes by Hilberdink and Lapidus

5) Spectral analysis and the Riemann hypothesis by Lachaud

6) Notes on the Riemann hypothesis by Perez-Marco

7) An essay on the Riemann Hypothesis by Alain Connes

Added on Oct. 3, 2019:

8) “The Riemann hypothesis explained” by Veisdal (blog post). A quick general intro to the Riemann zeta, RH, and the prime number theorem.

9) “Riemann’s Explicit Formula” by Sean Li. Derivations with convergence arguments provided.

More

10) “A history of the prime number theorem” by Anita Alexander

11) “A history of the prime number theorem” by Goldstein

12) “Mellin convolution and its extensions, Perron formula, and explicit formulae” by Jose Javier Garcia Moreta. An analysis also based on simple Mellin transform properties.

13) Convergence of Riemann spectrum/Fourier transform of prime powers a MSE question posed by Joe Knapp and answered by reuns

14) “Twenty female mathematicians” by H. Williams. (See the section on Mirzakhani.)

15) “Some remarks on the Riemann zeta distribution” by Allan Gut

**d’Alembert**:

*Go forward, faith will follow! *

**Laurent** **Schwartz**:

*I have always thought that morality in politics was something essential, just like feelings and affinities.*

*To discover something in mathematics is to overcome an inhibition and a tradition. You cannot move forward if you are not subversive.*

**Jean Baptiste Joseph Fourier** (1768–1830)

*Mathematics compares the most diverse phenomena and discovers the secret analogies that unite them. *

**Richard Feynman**

*Physics is imagination in a straitjacket.*

*When in doubt, integrate by parts.*

*We leave the operators, as Jeans said,* “*hungry for something to differentiate.”*

**Sophia Kovalevskaya** (1850–1891)

*It is not possible to be a mathematician without being a poet at heart.*

**Anonymous quote (in an anecdote, probably by Gian-Carlo Rota)**

*I do discrete, not continuous.*

**Anonymous, paraphrased**

*Schwartz’s theory of distributions is an example of the French propensity to turn an operation into a theory–in this case, integration by parts.*

**Hermann Weyl**

*In these days the angel of topology and the devil of abstract algebra fight for the soul of every individual discipline of mathematics.*

,

,

,

.

There is method to my madness in writing these relations in a roundabout way by letting . I want to find relations between the two families of symmetric polynomials—the elementary symmetric polynomials (ESPs), denoted by , and the complete homogeneous symmetric polynomials (CSPs), denoted by , whose o.g.f.s and are essentially reciprocal to each other:

.

Nowhere will I use the explicit definitions of these symmetric polynomials, so the relations uncovered will apply to basically any generic o.g.f. and the polynomial sequences formed from it and its shifted multiplicative and compositional inverses.
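Because the o.g.f.s are reciprocal, the ESPs and CSPs satisfy the alternating convolution identity (the coefficient of $t^n$ in $E(-t)H(t) = 1$ must vanish for $n \ge 1$); a brute-force Python check on an arbitrary numeric alphabet (the values are placeholders; any numbers work):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

xs = [2, 3, 5, 7]  # an arbitrary finite 'alphabet' of indeterminate values

def e(k):
    """Elementary symmetric polynomial e_k: sum of products over k-subsets."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k):
    """Complete homogeneous symmetric polynomial h_k: sum over k-multisets."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# E(-t) H(t) = 1 termwise: sum_{k=0}^{n} (-1)^k e_k h_{n-k} = 0 for n >= 1.
print([sum((-1) ** k * e(k) * h(n - k) for k in range(n + 1)) for n in range(1, 6)])
```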

With these definitions the generating functions for the Appell sequence for become

,

so ,

where . The first few polynomials are

,

,

.

The numerical coefficients are given by OEIS A094587 and A008279.

The other sequence of Appell polynomials depends on the coefficients of the power series for the multiplicative inverse of , which is from A134264, which gives the multiplicative inverse in terms of the coefficients of the power series of the function’s shifted reciprocal as the signed refined partition polynomials for the noncrossing partitions $NC_n$

with the first few partition polynomials

,

,

,

.

An alternative is to use the inversion scheme of A133437 involving the signed refined face partition polynomials of the Stasheff polytopes, or associahedra. Then

with the first few polynomials

,

,

.

OEIS A263633 can be used to convert between the ESPs and CSPs and so also between the and .

The first few conversions are

,

,

.

(The relation is an involution satisfied when and are interchanged.)

Now return to the other Appell sequence defined by

.

Then

where . The first few are

,

,

.

Now for the two binomial Sheffer sequences, we will use the refined Lah partition polynomials described in the Lagrange a la Lah pdf (Part I, cf. also A130561, these are the normalized, signed elementary Schur polynomials) with the e.g.f.

.

The first few polynomials are

,

,

,

,

.

Then

with the first few being

,

,

,

.

Similarly,

with the first few being

,

,

,

.

(Recall that converting the ESPs into CSPs gives products of also.)

The earlier post establishes the umbral similarity transformations (the row by row equivalent of a matrix similarity transformation)

and

.

Let’s do some spot checks. First,

and

,

which is consistent.

Second check:

,

and

.

With , we obtain, for ,

where .

For example,

,

and, substituting for ,

.

Reprising, the refined Lah polynomials (aka, the elementary Schur polynomials) can be used to generate the coefficients of the reciprocal, or multiplicative inverse,

of an o.g.f.

from the coefficients of the power series of the shifted o.g.f.’s compositional inverse

.

(Note that could be substituted into the above equations.)

This is a rather roundabout method to determine reciprocals when A263633 could be used more efficiently given the , but it establishes transformations among partition polynomial sequences that encode information about important combinatorial constructs–permutations, associahedra, noncrossing partitions–and operations in analysis–multiplicative and compositional inversion.

The associations via the refined Lah polynomials are by virtue of the fact that the pair of Appell polynomials (which are *not* an inverse pair under umbral composition) are related to each other by a similarity transformation by an inverse pair of lower triangular matrices with the row polynomials and , which are an inverse pair under umbral composition of the row polynomials, i.e., . Since these Appell polynomials can be generated by differential raising operators, these results can be recoded in terms of a pair of Appell raising operators or differential transform operators that transform each into an Appell polynomial.

Again, nowhere have explicit constructions of the ESPs or CSPs been required, so the formalism above applies to a generic o.g.f. substituted for or .

Finally, we find another of my favorite families of convex polytopes–the permutahedra–if we relate to via the signed, refined face partition polynomials (cf. A133314) of the permutahedra.

Edit (Oct. 1, 2019):

In “Topics in topology. Fall 2008: The signature theorem and some of its applications” by Liviu I. Nicolaescu, we find on page 85 two generating series related to L-genera and multiplicative sequences, and , to which the above formalism can easily be applied since they are related by

and

where .

OEIS A133932 could be used to compute directly from the coefficients .

One day last fall in a class, several curious 12th graders marvelled at the relationship I showed them between the Pascal triangle (OEIS A007318) and the enumerative geometry of triangles and squares and their $n$-dimensional extensions/abstractions, the hypertriangles (HTs) and hypersquares (HSs) (or, equivalently, the tetrahedrons and hypertetrahedrons, and the cubes and hypercubes). By looking at certain physico-geometric ways of generating the $n$-dimensional extensions, we can relate simple algebraic manipulations–multiplication of polynomials and a matrix by itself–to counting the components of these geometric constructs, enumerated by their face-polynomials.

First, let’s tackle the HT beasties. The face-polynom, or f-polynom, of a polytope, such as a HT or HS, has coefficients enumerating the number of $k$-D faces of the polytope. For example, the 2-D HT is a triangle with the associated f-polynom

$T_2(x) = 3 + 3x + x^2,$

enumerating the 3 vertices (0-D faces), 3 edges (1-D faces), and 1 triangle (2-D face) of a triangle. The 3-D HT is a tetrahedron with f-polynom

$T_3(x) = 4 + 6x + 4x^2 + x^3,$

corresponding to the 4 vertices, 6 edges, 4 triangles, and 1 tetrahedron comprising its faces. (Note that in the general literature the $(n-1)$-D faces of an $n$-D polytope can be referred to as its facets.) The general formula that follows from the forthcoming method of constructing the HTs for the f-polynom of the $n$-D HT is

$T_n(x) = \sum_{k=0}^{n} \binom{n+1}{k+1}\, x^k = \frac{(1+x)^{n+1} - 1}{x}.$

Now for the recursive physico-geometric construction of an $n$-D HT from an $(n-1)$-D HT, consider the 1-D HT, a line segment. Move from the 1-D HT in a direction perpendicular to it and place a point some distance from it. Then draw line segments connecting each vertex of the 1-D HT to the new vertex. We have generated an instance of the 2-D HT, a triangle.

To obtain the 3-D HT, the tetrahedron, just iterate on the algorithm for generating the triangle by replacing each occurrence of 1-D by 2-D. That is, move in a direction perpendicular to the plane of the triangle, the 2-D HT, place a point some distance away from it, and connect that new vertex to each old vertex, and you have a 3-D HT, a tetrahedron. For higher dimensions, repeat the procedure ad nauseam.

Now we are in a position to relate multiplication of $(1+x)^n$ by $(1+x)$ to obtain the formula for $T_n(x)$, the f-polynom of the $n$-D hypertriangle. Note that the coefficients of the polynomial $(1+x)^{n+1}$ (the elements of the $(n+1)$-st row of the Pascal triangle), apart from the leading 1, allegedly enumerate the number of vertices ($\binom{n+1}{1}$), edges ($\binom{n+1}{2}$), triangles ($\binom{n+1}{3}$), etc., that comprise the $n$-D HT. From the geometric construction, the new $n$-D HT has one more vertex than the old $(n-1)$-D HT. This is reflected in the multiplication in the factors via Pascal’s rule $\binom{n+1}{1} = \binom{n}{1} + \binom{n}{0}$. The new HT has as many edges as the old plus an edge for each old vertex, giving $\binom{n+1}{2} = \binom{n}{2} + \binom{n}{1}$. Similarly, the new HT has as many triangles as the old plus a new one for each old edge; i.e., $\binom{n+1}{3} = \binom{n}{3} + \binom{n}{2}$. And so on for the higher dimensional faces.

Following this train of physico-geometric intuition mapped to this particularly simple algebraic recursion relation, we have established that the general f-polynom is indeed

$T_n(x) = (1+x)\, T_{n-1}(x) + 1 = \frac{(1+x)^{n+1} - 1}{x},$

and in some sense defined what hypertriangles are.
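The recursion just described (multiply the old f-polynom by $(1+x)$, then add 1 for the new vertex) is easy to check numerically. A minimal Python sketch, with function names of my own invention, builds the f-polynoms this way and compares them against the binomial coefficients read off the Pascal triangle:

```python
from math import comb

def ht_fpoly(n):
    """f-polynom coefficients [c_0, ..., c_n] of the n-D hypertriangle,
    built by the physico-geometric recursion T_n(x) = (1+x)*T_{n-1}(x) + 1,
    starting from the 0-D HT, a single vertex: T_0(x) = 1."""
    T = [1]
    for _ in range(n):
        new = [0] * (len(T) + 1)
        for k, c in enumerate(T):   # multiply by (1 + x)
            new[k] += c
            new[k + 1] += c
        new[0] += 1                 # + 1 for the new vertex
        T = new
    return T

# agrees with the closed form: coefficient of x^k is comb(n+1, k+1)
for n in range(8):
    assert ht_fpoly(n) == [comb(n + 1, k + 1) for k in range(n + 1)]

print(ht_fpoly(2), ht_fpoly(3))  # triangle [3, 3, 1], tetrahedron [4, 6, 4, 1]
```

The printed rows recover the triangle and tetrahedron counts above.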

Now we are ready to look at the f-polynoms of the hypersquares (HSs) and show that their coefficients are generated by squaring the Pascal matrix as a lower triangular matrix, i.e., by squaring a triangle. This is equivalent to umbral substitution of the row polynomial $(1+x)^n$ into itself; that is, the face-polynomial for the $n$-D HS is $S_n(x) = \sum_{k=0}^{n} \binom{n}{k} (1+x)^k$. For example,

$S_2(x) = 1 + 2(1+x) + (1+x)^2 = 4 + 4x + x^2,$

enumerating the faces of a square (4 vertices, 4 edges, and 1 square). This can be simplified as

$S_n(x) = \sum_{k=0}^{n} \binom{n}{k} (1+x)^k = (2+x)^n.$

The method of physico-geometric construction of a new $n$-D HS from an old $(n-1)$-D HS is to drag the old through a dimension perpendicular to it letting the edges fill in new squares. For example, drag a line segment in a direction perpendicular to it and a square is obtained. Drag a square in a direction perpendicular to the plane of the square and a cube is obtained. Check that $S_3(x) = (2+x)^3 = 8 + 12x + 6x^2 + x^3$ correctly enumerates the faces of a cube, and develop algebraic arguments analogous to those for the hypertriangles to prove the general formula for $S_n(x)$. Do a numerical check that squaring the Pascal matrix as a lower triangular matrix (i.e., a matrix with zeros above the main diagonal) gives the same result as the umbral substitution (cf. A038207).
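The suggested numerical check takes only a few lines of Python (the matrix and variable names here are my own). Squaring the lower-triangular Pascal matrix reproduces the entries $\binom{n}{k}\,2^{n-k}$ of A038207, i.e., the coefficients of $(2+x)^n$:

```python
from math import comb

N = 8
P = [[comb(n, k) for k in range(N)] for n in range(N)]  # comb(n, k) = 0 for k > n

# Square the Pascal matrix as a lower triangular matrix
P2 = [[sum(P[n][j] * P[j][k] for j in range(N)) for k in range(N)] for n in range(N)]

# Entries match comb(n, k) * 2^(n - k), the coefficients of (2 + x)^n
# (cf. A038207), i.e., the f-polynom of the n-D hypersquare
for n in range(N):
    for k in range(N):
        assert P2[n][k] == (comb(n, k) * 2 ** (n - k) if k <= n else 0)

print(P2[3][:4])  # faces of the cube: 8 vertices, 12 edges, 6 squares, 1 cube
```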

From here paths meander through glades of analysis, geometry, and combinatorics, roamed by a hybrid of aspects of the hypertriangles and hypersquares–the Vandermonde matrix–and the ubiquitous Bernoulli polynomials and their elegant escorts, the reciprocal polynomials.

*Related stuff and other trails in combinatorics*:

1) Counting faces of polytopes by Lee. (The varying definitions and uses of the terms h-vector and h-polynomial–and even f-polynomial–in the vast literature are confusing. If we take the h-polynomial to be algebraically defined as $h_n(x) = f_n(x-1)$ with the f-polynomials, $f_n(x)$, described above, then for the hypertriangles, we obtain $h_n(x) = T_n(x-1) = \frac{x^{n+1} - 1}{x - 1} = 1 + x + \cdots + x^n$ and for the hypercubes, $h_n(x) = S_n(x-1) = (1+x)^n$.)

4) Goin’ with the Flow: Logarithm of the Derivative

5) OEIS A074909

6) OEIS A135278

7) GeneratingFunctionology by Wilf

8) Analytic Combinatorics by Flajolet and Sedgewick

9) Enumerative Combinatorics Vol. I by Stanley

10) Algebraic and geometric methods in enumerative combinatorics by Ardila

11) A species approach to Rota’s twelvefold way by Claesson

12) Combinatorial Species–Bergeron

13) Computing the Discrete Continuously by Beck and Robins

14) Triangulations of Point Sets by De Loera, Rambau, Santos

15) Ch. 10: Topology Grows into a Branch of Mathematics in the book *Never a Dull Moment: Hassler Whitney, Mathematics Pioneer* by Kendig

16) Merging/identifying opposing facets of the $n$-hypercube gives us the $n$-torus, and truncating facets of the hypertriangles or hypercubes gives us permutahedra and associahedra and paths to other important and exciting fields of modern research in math and quantum physics.

Some background and refs are given in the body and comments of the MathOverflow question “Expansions of iterated, or nested, derivatives, or vectors–conjectured matrix computation.” And another proof was added on Oct. 14, 2019.

The exponentiation and resultant partition polynomials are central to unveiling the relationships among compositional inversion via series; the differential geometry of vector fields; solutions of evolution equations, including the inviscid Burgers’ equation and the soliton solution of the KdV equation; the enumerative combinatorics of analytic rooted Cayley trees, Dyck paths, the associahedra, dissections of polygons, noncrossing partitions, and phylogenetic trees, among other combinatorial geometric constructs; algebraic geometry and certain moduli spaces; generalized permutahedra, Hopf monoids, and optimization (see the Aguiar and Ardila ref); free cumulants and their associated moments in free probability; the enumerative combinatorics of iterated convolutions associated to the Hirzebruch criterion for the Todd class; Koszul duality and quadratic operads; and pre-Lie algebras and Butcher series (also this post).

Consider the generalized shift operator $e^{a\partial_x}$ acting on the basic integral powers of $x$ as the following differential operator, with $\partial_x = d/dx$ and $n = 0, 1, 2, \ldots$:

$e^{a\partial_x}\, x^n = \sum_{k \ge 0} \frac{a^k}{k!}\, \partial_x^k\, x^n = (x+a)^n.$

Then also, with the forward difference op $\Delta = e^{\partial_x} - 1$,

$e^{a\partial_x} = (1+\Delta)^a = \sum_{k \ge 0} \binom{a}{k}\, \Delta^k,$

and we can identify the two reps, related through a binomial transform, when acting on polynomials; i.e.,

$e^{a\partial_x}\, p(x) = \sum_{k \ge 0} \binom{a}{k}\, \Delta^k\, p(x) = \sum_{k \ge 0} \binom{a}{k} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j}\, p(x+j) = p(x+a),$

a finite sum for $p(x)$ a polynomial.

When acting on a function analytic at $x$ and $x+a$, the generalized shift operator gives the Taylor series of the function about those points:

$e^{a\partial_x}\, f(x) = \sum_{n \ge 0} f^{(n)}(x)\, \frac{a^n}{n!} = f(x+a),$

which will evaluate the same at a point as long as the two circles of convergence overlap at that point, i.e., when both series are convergent at that point.
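A sketch of the identification of the two reps in Python, with names and the particular cubic test polynomial my own choices: since the forward differences of a degree-$d$ polynomial vanish beyond order $d$, the finite-difference rep is a finite sum that matches the Taylor shift exactly, even for a non-integer shift:

```python
from math import comb

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for real a."""
    out = 1.0
    for j in range(k):
        out *= (a - j) / (j + 1)
    return out

def fwd_diff(f, x, k):
    """k-th forward difference of f at x with unit step."""
    return sum((-1) ** (k - j) * comb(k, j) * f(x + j) for j in range(k + 1))

f = lambda x: x**3 - 2 * x + 1   # a hypothetical test polynomial
x, a = 2.0, 0.5                  # a non-integer shift

# The finite-difference series terminates at k = deg f for a polynomial
newton = sum(gbinom(a, k) * fwd_diff(f, x, k) for k in range(4))
assert abs(newton - f(x + a)) < 1e-9  # matches the Taylor shift f(x + a)
```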

A particularly enlightening example is $f(x) = x^s$ for $s$ non-integral. Then the Taylor series rep about the origin gives divergent derivatives at some order, but

$e^{a\partial_x}\, x^s \Big|_{x=0} = \sum_{k \ge 0} \binom{a}{k}\, \Delta^k\, x^s \Big|_{x=0} = \sum_{k \ge 0} \binom{a}{k} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j}\, j^s,$

a finite difference interpolation that can be taken as the value of $a^s$ when convergent. Note that $s$ a positive integer gives results consistent with the prior discussion. As a sanity check, try a positive integer $s$ and a non-integer shift $a$.

This result is also consistent with formal interpolation via the Mellin Transform:

$a^s = \frac{1}{\Gamma(-s)} \int_0^\infty t^{-s-1}\, e^{-at}\, dt,$

convergent for $\operatorname{Re}(s) < 0$ and extended elsewhere by analytic continuation.
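A numerical sketch of this interpolation (the helper names and the particular test values $s = 1/2$, $a = 2.5$ are my own choices): the truncated finite-difference series over integer samples of $\sqrt{n}$ approaches $\sqrt{a}$, precisely where the Taylor series about the origin is useless:

```python
from math import comb, sqrt

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for real a."""
    out = 1.0
    for j in range(k):
        out *= (a - j) / (j + 1)
    return out

def newton_series(samples, a, K):
    """sum_{k=0}^{K} C(a,k) Delta^k f(0) from samples f(0), f(1), ..., f(K)."""
    total = 0.0
    for k in range(K + 1):
        dk = sum((-1) ** (k - j) * comb(k, j) * samples[j] for j in range(k + 1))
        total += gbinom(a, k) * dk
    return total

# f(n) = sqrt(n): the Taylor series about the origin fails (divergent
# derivatives), but the finite-difference interpolation converges to sqrt(a)
samples = [sqrt(n) for n in range(31)]
val = newton_series(samples, 2.5, 30)
assert abs(val - sqrt(2.5)) < 1e-2
```

Convergence is slow (the terms decay only polynomially), so the truncation order and tolerance here are generous.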

A similar duality of differential ops involving the binomial transform exists for umbral scaling op reps

$(1+a)^{x\partial_x}\, f(x) = f((1+a)\,x),$

where

$(1+a)^{x\partial_x} = \sum_{k \ge 0} \binom{x\partial_x}{k}\, a^k = \sum_{k \ge 0} \frac{(ax)^k}{k!}\, \partial_x^k.$

Acting on the exponential, with the scaling parameter treated umbrally ($a \to b.$ with $b.^n = b_n$), we obtain the binomial transform for an exponential generating function (e.g.f.):

$(1+b.)^{x\partial_x}\, e^{x} = e^{(1+b.)x} = e^{x}\, e^{b.x}, \quad \text{i.e.,} \quad \sum_{n \ge 0} \Big[ \sum_{k=0}^{n} \binom{n}{k}\, b_k \Big] \frac{x^n}{n!} = e^{x} \sum_{n \ge 0} b_n\, \frac{x^n}{n!}.$

Taking the Borel-Laplace transform of the two equivalent e.g.f.s generates the analogous binomial transform for the corresponding ordinary generating functions (o.g.f.s):

$\sum_{n \ge 0} \Big[ \sum_{k=0}^{n} \binom{n}{k}\, b_k \Big]\, z^n = \frac{1}{1-z} \sum_{n \ge 0} b_n \left( \frac{z}{1-z} \right)^{n}.$

In contrast to the binomial transformation of the e.g.f.s, for which both expressions are convergent or divergent together, one of the expressions for the o.g.f. may be divergent while the other is convergent.
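The o.g.f. version of the binomial transform, with $B(z)$ the o.g.f. of $(b_n)$ mapped to $\frac{1}{1-z}\, B\!\left(\frac{z}{1-z}\right)$, can be checked with truncated power-series arithmetic. This Python sketch (helper names my own, arbitrary test sequence) compares the direct transform with the series composition:

```python
from math import comb

def binomial_transform(b):
    """Direct transform: n-th term is sum_k C(n, k) * b_k."""
    return [sum(comb(n, k) * b[k] for k in range(n + 1)) for n in range(len(b))]

def series_mul(f, g, N):
    """Truncated product of power series given as coefficient lists."""
    h = [0.0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < N:
                h[i + j] += fi * gj
    return h

def ogf_binomial_transform(b):
    """Coefficients of (1/(1-z)) * B(z/(1-z)), truncated to len(b) terms."""
    N = len(b)
    g = [0] + [1] * (N - 1)          # z/(1-z) = z + z^2 + z^3 + ...
    res = [0.0] * N
    for c in reversed(b):            # Horner-style composition B(g(z))
        res = series_mul(res, g, N)
        res[0] += c
    out, s = [], 0.0                 # multiply by 1/(1-z): prefix sums
    for r in res:
        s += r
        out.append(s)
    return out

b = [1, 3, 2, 5, 4]                  # arbitrary test sequence
direct = binomial_transform(b)
via_ogf = ogf_binomial_transform(b)
assert all(abs(d - v) < 1e-9 for d, v in zip(direct, via_ogf))
```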


Fig. 1

(A and B) Complementary [G3NO3]2+ (yellow) and HSPB6– (green) tiles, with their corresponding edge lengths, defined by the distance spanned by neighboring guanidinium and sulfonate ions, respectively. (C) Schematic representation of an unfolded q-TO based on the complementary [G3NO3]2+ (yellow) and HSPB6– (green) tiles, illustrating the edge-shared N-H···O-S hydrogen bonds. (D) The q-TO. The open squares in (C) and (D) correspond to the openings on the surface of the q-TO that enable the formation of channels between adjacent q-TOs in the solid state.

From “Supramolecular Archimedean Cages Assembled with 72 Hydrogen Bonds” by Yuzhou Liu, Chunhua Hu, Angiolina Comotti, and Michael D. Ward, Science, 22 Jul 2011, Vol. 333, Issue 6041, pp. 436-440.

Fig. 2 A vortex field on Saturn

This is a temporary pedagogical post presenting an elementary computation of a centroid, required for an application to a potential employer.

**Problem**:

Compute the center of mass (CM), or centroid, of the region (R) of uniform mass density enclosed by the quadratic curves

and

**Solution**:

The curves are partially presented in the graph above with , from to , the upper boundary of R and the lower boundary.

The vertical distance between the boundary curves as a function of is

As a sanity check, this equation confirms that the curves intersect at the points and where vanishes.

The midpoint of the vertical line segments from the lower to the upper boundary is given as a function of as

(Try another simple sanity check at the endpoints of R.)

The total area of R can be obtained by summing non-overlapping rectangles of extremely small width and vertical length that cover R, which gives in the limit

We’ve characterized R well enough now to calculate its CM, but first, by inspection, we expect we could balance R on the tip of our index finger by placing it around the point .

To motivate the computation of the $x$ coordinate of the CM, assume we have only two vertical rulers of length and of uniform mass density and width vertically balanced to the right of the origin at and , respectively, on a seesaw extended along the x-axis and centered at the origin. The weights of the two rulers are for and , respectively.

To balance the seesaw, we can place a weight equal to

on the seesaw to the left of origin at a distance such that

so

With a seesaw of width much larger than the extension of the rulers, we can simply move the rulers in the vertical direction without changing the balance. The two rulers can then be replaced by a single mass on the seesaw at a distance to the right of the origin and the seesaw will remain balanced. This defines the $x$ coordinate of the CM of the system.

Analogously, for the region R, we can compute the $x$ coordinate of the CM as

The computation of the $y$ coordinate of the CM of R can be viewed in a similar manner by pushing the vertical rectangles/rulers to the left onto a seesaw extended along the y-axis and centered at the origin. Each ruler of length has its own CM at its midpoint and weight . This motivates the integral computation

So finally we see that the calculated CM agrees with our initial guess
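Since the post’s specific quadratic curves live in the lost graph, here is a numerical sketch of the whole ruler/seesaw computation with a hypothetical pair of boundary curves (upper $y = 2x - x^2$, lower $y = x^2$, intersecting at $x = 0$ and $x = 1$), for which the centroid comes out to $(1/2, 1/2)$:

```python
def centroid(up, lo, a, b, n=100_000):
    """Centroid of the region between y = lo(x) and y = up(x) over [a, b],
    summing thin vertical rulers (midpoint rule)."""
    h = (b - a) / n
    A = Mx = My = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h                # center of the thin ruler
        d = up(x) - lo(x)                    # ruler length
        A += d * h                           # area element
        Mx += x * d * h                      # moment about the y-axis
        My += 0.5 * (up(x) + lo(x)) * d * h  # each ruler's CM sits at its midpoint
    return Mx / A, My / A

# Hypothetical quadratic boundaries (the post's actual curves are in the
# lost graph): upper y = 2x - x^2 and lower y = x^2, meeting at x = 0, 1
xbar, ybar = centroid(lambda x: 2 * x - x * x, lambda x: x * x, 0.0, 1.0)
assert abs(xbar - 0.5) < 1e-6 and abs(ybar - 0.5) < 1e-6
```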
