Generalized Compounding and Growth Optimal Portfolios V2

notes
risk
mathematics
Clarifying the assumption needed for generalized compounding to work: conditional expected log-return must scale strictly linearly with the business time increment.
Author

Stephen J. Mildenhall

Published

2026-04-10

Modified

2026-04-11

Peter Carr and Umberto Cherubini’s 2022 paper, Generalized Compounding and Growth Optimal Portfolios: Reconciling Kelly and Samuelson [1], explains that the logarithm is not special. Its apparently special role in long-run growth comes from the compounding rule we assume. Change the way time enters returns, and a different transform may become the correct object to average and maximize. The paper proposes that when market activity is governed by a stochastic clock, the relevant transform is no longer the ordinary logarithm but the inverse of the clock’s moment generating function. That is a beautiful insight, and it deserves to be right.

I do not think the general derivation in Section 4.1 is correct as written. There is, however, a natural way to repair it. The repair is narrower than the paper’s general claim, but it preserves the central philosophy. Under a suitable assumption on the business-time dynamics, the argument goes through cleanly and yields exactly the kind of generalized compounding law the paper wants.

The gap in the general derivation

Let the cumulative business clock be \[ \tau_n = \sum_{j=1}^n Z_j, \] where the \(Z_j\) are independent, identically distributed clock increments. In Section 4.1 the paper defines \[ s_i = \frac{n}{\tau_n}\ln \overline{G}_i, \qquad \bar s = \frac1n \sum_{i=1}^n s_i, \] where \(\overline{G}_i\) is the conditional mean gross return in period \(i\) given the clock.

The key deconditioning step is \[ \mathsf P\left[\exp(\bar s \tau_n)\right] = \mathsf P\left[\exp\left(\bar s \sum_{i=1}^n Z_i\right)\right] = \prod_{i=1}^n \mathsf P\left[\exp(\bar s Z_i)\right], \] but that factorization is not valid in general. The reason is simple: \(\bar s\) is itself random. It depends on the same clock increments \(\{Z_1,\dots,Z_n\}\) that appear in the exponent. Independence of the \(Z_i\) does not justify pulling the expectation apart when the common coefficient \(\bar s\) is a function of those same variables. This is a mathematical gap. It does not mean the main idea of the paper is wrong. It means the proof needs an extra assumption.

The right repair

One possible repair is to assume that the conditional expected log-return in business time is linear in the realized business-time increment, with the same deterministic slope in every period. In symbols, \[ \ln \overline{G}_i = k Z_i \] for a deterministic constant \(k\).

Under that assumption, \[ s_i = \frac{n}{\tau_n} k Z_i, \] and therefore \[ \begin{aligned} \bar s &= \frac1n \sum_{i=1}^n s_i \\ &= \frac1n \sum_{i=1}^n \frac{n}{\tau_n} k Z_i \\ &= \frac{k}{\tau_n}\sum_{i=1}^n Z_i \\ &= k. \end{aligned} \] The randomness cancels. The average \(\bar s\) is no longer path-dependent; it collapses to the deterministic constant \(k\). Once that happens, the disputed factorization becomes rigorous: \[ \mathsf P\left[\exp\left(k\sum_{i=1}^n Z_i\right)\right] = \prod_{i=1}^n \mathsf P[e^{k Z_i}] = \psi(k)^n, \] where \(\psi(s)=\mathsf P[e^{sZ_1}]\) is the moment generating function of a single clock increment.

Taking \(n\)th roots gives \[ \left(\mathsf P\left[\frac{W_n}{W_0}\right]\right)^{1/n} = \psi(k), \] where \(W_n\) is wealth after \(n\) periods. If we denote the observable mean gross return in calendar time by \(\overline R\), then \[ \overline R = \psi(k), \qquad k = \psi^{-1}(\overline R). \] The function \(\psi^{-1}\) is the generalized logarithm. It is not introduced by fiat. It appears because calendar time hides the random business clock, and the inverse MGF is the transform that recovers the additive growth rate in business time.
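To make the collapse concrete, here is a minimal Monte Carlo sketch assuming a gamma clock with unit mean; the variance \(\theta\), the rate \(k\), and the horizon \(n\) are illustrative choices, not values from the paper. It checks both the factorization \(\mathsf P[\exp(k\tau_n)]^{1/n}=\psi(k)\) and that \(\psi^{-1}\) recovers \(k\).

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.25        # clock-increment variance (illustrative)
k = 0.08            # common business-time growth rate (illustrative)
n, sims = 5, 1_000_000

# iid gamma clock increments with mean 1 and variance theta
Z = rng.gamma(1 / theta, theta, size=(sims, n))
tau_n = Z.sum(axis=1)

psi = lambda s: (1 - theta * s) ** (-1 / theta)     # MGF of one increment
psi_inv = lambda R: (1 - R ** (-theta)) / theta     # generalized logarithm

R_bar = np.exp(k * tau_n).mean() ** (1 / n)   # calendar-time mean gross return
print(R_bar, psi(k))           # factorization: the two should agree
print(psi_inv(R_bar), k)       # inverse MGF recovers the growth rate
```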

What the repair does and does not prove

The repair is real, but it is narrower than the original Section 4.1 claim. It does not validate the full general derivation for arbitrary period-by-period conditional means \(\overline G_i\). It works when \[ \ln \overline G_i = k Z_i \] with the same deterministic constant \(k\) in every period. If instead \[ \ln \overline G_i = k_i Z_i \] with deterministic but varying \(k_i\), then \[ \bar s = \frac{\sum_i k_i Z_i}{\tau_n}, \] which is generally still random. In that case the original paper’s reduction to a single deterministic \(\bar s\) fails again.

The repaired theorem supports a homogeneous setting: a fixed portfolio, stationary business-time dynamics, and a clock whose increments are iid and possess the relevant exponential moment. That is still a substantial and interesting result. It is just not the full general statement suggested by the original derivation.

Why Brownian motion in business time works

A simple and important example is geometric Brownian motion run on business time. Suppose log-price evolves as \[ X(\tau)=\mu \tau + \sigma B(\tau). \] Over a realized business-time increment \(Z_i\), \[ \Delta X_i \mid Z_i \sim N(\mu Z_i,\sigma^2 Z_i), \] so the conditional mean gross return is \[ \overline G_i = \mathsf P[e^{\Delta X_i}\mid Z_i] = e^{(\mu+\sigma^2/2)Z_i}. \] Hence \[ \ln \overline G_i = k Z_i, \qquad k=\mu+\sigma^2/2. \] This is exactly the assumption needed above. The repair therefore holds for Brownian motion in business time.

In fact the same structure holds more generally for exponential Lévy models in business time. If the log-return process has stationary independent increments and a finite exponential moment at \(1\), then the conditional mean gross return over business time \(z\) takes the form \[ \mathsf P[e^{X(z)}] = e^{\kappa(1)z}, \] where \(\kappa\) is the cumulant generating function of \(X(1)\), so again \(\ln \overline G_i = k Z_i\) with \(k=\kappa(1)\). Brownian motion is only the simplest case, with \(\kappa(1)=\mu+\sigma^2/2\).
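A quick simulation makes the Brownian case tangible. The sketch below, with illustrative values of \(\mu\), \(\sigma\), and a gamma clock, checks that averaging the realized gross return \(e^{\Delta X}\) agrees with averaging the claimed conditional mean \(e^{kZ}\) with \(k=\mu+\sigma^2/2\).

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.03, 0.20    # business-time drift and volatility (illustrative)
k = mu + sigma**2 / 2     # the claimed common coefficient
sims = 1_000_000

# one business-time increment: gamma with mean 1, variance 0.5 (illustrative)
Z = rng.gamma(2.0, 0.5, size=sims)
# GBM log-return run on business time, simulated conditionally on Z
dX = mu * Z + sigma * np.sqrt(Z) * rng.standard_normal(sims)

# both sides average the same conditional mean, so they should agree
print(np.exp(dX).mean())      # E[exp(dX)]
print(np.exp(k * Z).mean())   # E[exp(kZ)] = psi(k)
```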

Example 1 below makes the failure of the deconditioning step concrete in a two-period model.

Example 1. Consider an \(n=2\) period model with continuously compounded returns \(r_1,r_2\) in each period. Use operational time and set \[ W_0 = 1, \quad W_1 = \exp(r_1 Z_1),\quad W_2=\exp(r_1Z_1 + r_2 Z_2) \] with \(Z_i\) identically distributed gamma variables with mean 1 and variance \(\nu\). The MGF equals \[ \phi(r):=\mathsf E[e^{rZ}]=(1-\nu r)^{-1/\nu}. \] Set \(\tau_n = Z_1 + Z_2\). Note that the expected one-period gross returns are \(\phi(r_1)\) and \(\phi(r_2)\).

We want to define returns \(\bar s\) and \(s_i\) so that \[ \bar s\tau_n = r_1 Z_1+ r_2Z_2,\quad \bar s = \frac{s_1 + s_2}{2}. \] Note that \(\bar s\) and the \(s_i\) are random because they depend on the \(Z_i\), even though the \(r_i\) are fixed. Following Eq. 14, 17 in the paper, we can do this by setting \[ s_i = \frac{n}{\tau_n} r_i Z_i = \frac{n}{\tau_n} \log(W_i/W_{i-1}). \] Then (Eq. 17), since \(n=2\), \[ \bar s = \frac{r_1 Z_1+ r_2Z_2}{\tau_n} = \frac{s_1 + s_2}{2}. \] If the \(Z_i\) are exchangeable, then by symmetry \(\mathsf E[s_i]=nr_i\mathsf E[Z_i/\tau_n]=r_i\).

Now we can look at the derivation in Eq. 18. It starts \[ \begin{align} \mathsf E[W_2] :&= \mathsf E[\exp(r_1 Z_1 + r_2 Z_2)] \\ &= \mathsf E[\exp(\bar s \tau_2)] \\ &= \mathsf E[\exp(\bar s (Z_1 + Z_2))]. \end{align} \] Assume the \(Z_i\) are independent. From the first line, independence gives \[ \begin{align} \mathsf E[W_2] :&= \mathsf E[\exp(r_1 Z_1 + r_2 Z_2)] \\ &= \mathsf E[\exp(r_1 Z_1)]\,\mathsf E[\exp(r_2 Z_2)] \\ &= \phi(r_1)\phi(r_2). \end{align} \] If we want to define \(\tilde r\) so that \(\phi(\tilde r)^2 = \phi(r_1)\phi(r_2)\), then from the known form of \(\phi\) we must have \[ \tilde r = \frac{1 - ((1-\nu r_1)(1-\nu r_2))^{1/2}}{\nu}. \] The paper argues from the second line \[ \begin{align} \mathsf E[W_2] &= \mathsf E[\exp(\bar s \tau_2)] \\ &= \mathsf E[\exp(\bar s (Z_1 + Z_2))] \\ &\overset{!}{=} \prod_i \mathsf E[\exp(\bar s Z_i)] \\ &= \prod_i \mathsf E[\exp(\bar s Z_1)] \\ &= \phi(\bar s)^n. \end{align} \] However, the step marked \(\overset{!}{=}\) is invalid: the factors \(\exp(\bar s Z_i)\) are not independent, because \(\bar s = \bar s(Z_1, Z_2)\) depends on both increments.
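The gap is easy to see numerically. The following sketch, with illustrative values of \(r_1 \ne r_2\) and \(\nu\), confirms that \(\mathsf E[W_2]=\phi(r_1)\phi(r_2)\) while the disputed product \(\prod_i \mathsf E[\exp(\bar s Z_i)]\) lands somewhere else.

```python
import numpy as np

rng = np.random.default_rng(2)
nu = 0.5                 # clock-increment variance (illustrative)
r1, r2 = 0.05, 0.40      # distinct period returns (illustrative)
sims = 1_000_000

# independent gamma increments with mean 1 and variance nu
Z1 = rng.gamma(1 / nu, nu, size=sims)
Z2 = rng.gamma(1 / nu, nu, size=sims)
sbar = (r1 * Z1 + r2 * Z2) / (Z1 + Z2)   # random: depends on the clock

phi = lambda r: (1 - nu * r) ** (-1 / nu)   # MGF of one increment

print(np.exp(sbar * (Z1 + Z2)).mean(), phi(r1) * phi(r2))   # E[W_2]: agree
print(np.exp(sbar * Z1).mean() * np.exp(sbar * Z2).mean())  # disputed step: differs
```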

The translation from business time to calendar time

In business time, growth is ordinary geometric compounding. The natural logarithm extracts the additive growth rate: \[ \ln \overline G_i = k Z_i. \] But investors observe prices in calendar time, not business time. Once the random clock is hidden, the observable mean gross return over one calendar period is \[ \overline R = \mathsf P[e^{kZ}] = \psi(k). \] At that point the ordinary logarithm is no longer the right inverse. To recover the additive business-time growth rate from the calendar-time return, we must apply the inverse MGF: \[ \psi^{-1}(\overline R)=k. \] That is the real content of generalized compounding. The inverse MGF is not a preference functional. It is the calendar-time transform that recovers the additive growth rate operating in business time. When the clock is deterministic, \(\psi(s)=e^s\), so \(\psi^{-1}=\ln\) and we are back in the classical Kelly world.

Examples

All of the examples below require the clock increment \(Z\) to have a finite MGF at the relevant value of \(k\).

1. Variance Gamma clock

If \(Z\) is Gamma with mean \(1\) and variance \(\theta\), its MGF is \[ \psi(s) = (1-\theta s)^{-1/\theta}. \] The inverse is \[ \psi^{-1}(R)=\frac{1-R^{-\theta}}{\theta}. \] This is the paper’s power-type generalized logarithm, with parameterization \(\gamma=-\theta\). As \(\theta \to 0\), we recover \[ \psi^{-1}(R)\to \ln R. \]

2. Normal Inverse Gaussian clock

If \(Z\) is inverse Gaussian with mean \(1\) and variance \(1/\theta\), then \[ \psi(s)=\exp\!\left(\theta\left(1-\sqrt{1-\frac{2s}{\theta}}\right)\right). \] Solving for \(s\) gives \[ \psi^{-1}(R)=\ln R - \frac{(\ln R)^2}{2\theta}. \] This is the quadratic log transform highlighted in the paper, formally analogous to a mean-variance objective.

3. Poisson clock, normalized to unit mean

To compare fairly with deterministic calendar time, it is best to normalize the clock to mean \(1\). Let \(N \sim \mathrm{Poisson}(\lambda)\) and set \[ Z=\frac{N}{\lambda}. \] Then \(\mathsf P Z = 1\) and \[ \psi(s)=\mathsf P[e^{sZ}] = \exp\!\left(\lambda\left(e^{s/\lambda}-1\right)\right). \] The inverse is \[ \psi^{-1}(R) = \lambda \ln\!\left(1+\frac{\ln R}{\lambda}\right). \] As \(\lambda \to \infty\), the normalized clock concentrates at \(1\), and \[ \psi^{-1}(R)\to \ln R. \] So standard Kelly reappears in the high-intensity, low-variance limit.

4. Compound Poisson clock, normalized to unit mean

Let \[ Z=\frac1\lambda \sum_{m=1}^N Y_m, \] where \(N\sim\mathrm{Poisson}(\lambda)\) and the \(Y_m\) are iid Exponential\((1)\), so each jump has mean \(1\). Then \(\mathsf P Z = 1\), and \[ \psi(s) = \exp\!\left(\lambda\left(\frac{1}{1-s/\lambda}-1\right)\right) = \exp\!\left(\frac{s}{1-s/\lambda}\right). \] Writing \(y=\ln R\), we solve \[ y=\frac{s}{1-s/\lambda} \qquad\Longrightarrow\qquad s=\frac{\lambda y}{\lambda+y}. \] Hence \[ \psi^{-1}(R)=\frac{\lambda \ln R}{\lambda+\ln R}. \] This gives a transform of log-wealth that is bounded above by \(\lambda\). It is another example of how the clock, not a utility function, determines the observable-time averaging rule.
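As a sanity check on the algebra, the sketch below verifies all four inverse formulas above as functional inverses of the corresponding MGFs; the parameter values and the test return \(R\) are arbitrary illustrative choices.

```python
import numpy as np

theta, lam = 0.4, 3.0    # illustrative clock parameters
R = 1.07                 # an arbitrary mean gross return to invert

clocks = {
    "gamma":            (lambda s: (1 - theta * s) ** (-1 / theta),
                         lambda R: (1 - R ** (-theta)) / theta),
    "inverse Gaussian": (lambda s: np.exp(theta * (1 - np.sqrt(1 - 2 * s / theta))),
                         lambda R: np.log(R) - np.log(R) ** 2 / (2 * theta)),
    "Poisson":          (lambda s: np.exp(lam * (np.exp(s / lam) - 1)),
                         lambda R: lam * np.log1p(np.log(R) / lam)),
    "compound Poisson": (lambda s: np.exp(s / (1 - s / lam)),
                         lambda R: lam * np.log(R) / (lam + np.log(R))),
}
# each round trip psi(psi_inv(R)) should print R = 1.07
for name, (psi, psi_inv) in clocks.items():
    print(f"{name:16s}: {psi(psi_inv(R)):.6f}")
```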

Conclusion

The original derivation in Section 4.1 is too general. The deconditioning step requires more structure than the paper states. But the central idea survives. If the conditional expected log-return is linear in business time with a common deterministic slope, then the random average collapses to a constant, the factorization becomes valid, and the inverse MGF of the clock is exactly the right generalized logarithm in calendar time.

That repaired result is narrower than the published theorem, but it is still elegant. It preserves the most interesting message of the paper: long-run growth is not tied forever to the ordinary logarithm. The correct transform depends on how market time arrives.

References

1. Carr, P.P., Cherubini, U.: Generalized Compounding and Growth Optimal Portfolios: Reconciling Kelly and Samuelson. In: Peter Carr Memorial Conference. pp. 74–93 (2022)