Generalized Compounding and Growth Optimal Portfolios

notes
risk
mathematics
Generalized compounding: dethroning the logarithm function using business-to-calendar time decoding.
Author

Stephen J. Mildenhall

Published

2026-04-10

Modified

2026-04-14

Peter Carr and Umberto Cherubini’s 2022 paper, Generalized Compounding and Growth Optimal Portfolios: Reconciling Kelly and Samuelson [1], explains that the logarithm does not have a special role in computing average compounded growth rates. Its apparently special role in long-run growth comes from the compounding rule we assume. Change the way time enters returns, and a different transform may become the correct object to average and maximize. The paper proposes that when market activity is governed by a stochastic clock, the relevant transform is no longer the ordinary logarithm but the inverse of the clock’s moment generating function. That is a beautiful insight, and it deserves to be right.

However, I do not think the general derivation in Section 4.1 is correct as written. There is a natural way to repair it. The repair is narrower than the paper’s general claim, but it preserves the central philosophy. Under a suitable assumption on the business-time dynamics, the argument goes through cleanly and yields exactly the kind of generalized compounding law the paper wants.

1 Background and notation

The paper works with two notions of time. Calendar time is the ordinary passage of dates on the clock. Business time, also called operational time, measures the intensity of market activity. A stochastic clock connects the two. Over \(n\) calendar periods, let \[ \tau_n = \sum_{j=1}^n Z_j, \] where the increments \(Z_j\) are positive, independent, identically distributed random variables. The random variable \(Z_j\) is the amount of business time that elapses during calendar period \(j\).

Let \(V(\tau)\) denote the asset or portfolio value viewed in business time. Calendar-time wealth is obtained by subordinating \(V\) to the clock: \[ W_i = V(\tau_i), \qquad \tau_i = \sum_{j=1}^i Z_j. \] The gross return in calendar period \(i\) is therefore \[ \frac{W_i}{W_{i-1}} = \frac{V(\tau_i)}{V(\tau_{i-1})}. \]
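A minimal simulation sketch may help fix ideas. Purely for illustration, assume \(V\) is geometric Brownian motion in business time and the clock increments are gamma distributed with mean 1; the parameter values and the names `Z`, `tau`, `gross` are mine, not the paper’s.

```python
import numpy as np

# Illustrative sketch: wealth in calendar time is business-time value read
# off at the clock times tau_i = Z_1 + ... + Z_i.  Here V is (assumed)
# geometric Brownian motion in business time; all parameters are made up.
rng = np.random.default_rng(1)
n = 12                                    # calendar periods
theta = 0.5                               # clock variance
Z = rng.gamma(1 / theta, theta, size=n)   # iid clock increments, mean 1
tau = np.cumsum(Z)                        # business time at calendar dates

mu, sigma = 0.05, 0.20
# per-period log increments of V over business time Z_i: N(mu Z_i, sigma^2 Z_i)
dlogV = rng.normal(mu * Z, sigma * np.sqrt(Z))
W = np.exp(np.cumsum(dlogV))              # W_i = V(tau_i), with W_0 = 1

gross = W / np.concatenate(([1.0], W[:-1]))   # calendar gross returns W_i / W_{i-1}
print(gross.round(3))
```

The gross returns multiply back to terminal wealth, \(\prod_i W_i/W_{i-1} = W_n\), which is exactly the multiplicative structure the paper subordinates to the clock.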

The paper assumes that, conditional on the realized clock path, these gross returns are independent across periods. It then introduces a continuously compounded return associated with the conditional mean gross return in each period. That is where the logarithm first enters. At that stage, the logarithm is doing the familiar job it always does: converting multiplicative gross returns into additive continuously compounded returns in business time.

To keep notation clean, it helps to distinguish three layers:

  1. \(V(\tau)\): value evolving in business time;
  2. \(W_i = V(\tau_i)\): the same value observed at calendar dates;
  3. \(\overline G_i\): the conditional mean gross return in calendar period \(i\), given the clock.

The key question is then: once we average over the random clock, what transform recovers the additive growth rate that is natural in business time? The Carr–Cherubini answer is: not the ordinary logarithm in general, but the inverse of the clock MGF.

2 The conditional mean gross return \(\overline G_i\)

Let \(\mathcal F^\tau = \sigma(Z_1,\dots,Z_n)\) be the sigma-field generated by the clock increments over the full horizon, or equivalently by the realized clock path. Then the one-period conditional mean gross return in calendar period \(i\) is \[ \begin{aligned} \overline G_i :&= \mathsf P\left[\frac{W_i}{W_{i-1}} \,\middle|\, \mathcal F^\tau \right] \\ &=\mathsf P\left[\frac{V(\tau_i)}{V(\tau_{i-1})} \,\middle|\, \mathcal F^\tau \right]. \end{aligned} \] Since \[ \tau_i-\tau_{i-1}=Z_i, \] we can also write \[ \overline G_i = \mathsf P\left[\frac{V(\tau_{i-1}+Z_i)}{V(\tau_{i-1})} \,\middle|\, \mathcal F^\tau \right]. \]

Under stationary business-time dynamics, the conditional law of the increment depends only on the elapsed business time \(Z_i\), not on the starting point \(\tau_{i-1}\). In that case there is a deterministic function \(g\) such that \[ \overline G_i = g(Z_i). \] The repaired argument requires the stronger homogeneous condition \[ \overline G_i = e^{k Z_i} \] for a common deterministic constant \(k\), equivalently \[ \ln \overline G_i = k Z_i. \]

In words, \(\overline G_i\) is the mean gross return over calendar period \(i\), conditional on the realized amount of business time that passed during that period.

3 The gap in the general derivation

In Section 4.1 the paper defines \[ s_i = \frac{n}{\tau_n}\ln \overline{G}_i, \qquad \bar s = \frac1n \sum_{i=1}^n s_i. \]

The key deconditioning step is \[ \mathsf P\left[\exp(\bar s \tau_n)\right] = \mathsf P\left[\exp\left(\bar s \sum_{i=1}^n Z_i\right)\right] = \prod_{i=1}^n \mathsf P\left[\exp(\bar s Z_i)\right], \] but that factorization is not valid in general. The reason is simple: \(\bar s\) is itself random. It depends on the same clock increments \(\{Z_1,\dots,Z_n\}\) that appear in the exponent. Independence of the \(Z_i\) does not justify pulling the expectation apart when the common coefficient \(\bar s\) is a function of those same variables. This is a mathematical gap: it does not mean the main idea of the paper is wrong, but it does mean the proof needs an extra assumption.

4 A repair

One possible repair is to assume that the conditional expected log-return in business time is linear in the realized business-time increment, with the same deterministic slope in every period. In symbols, \[ \ln \overline{G}_i = k Z_i \] for a deterministic constant \(k\).

Under that assumption, \[ s_i = \frac{n}{\tau_n} k Z_i, \] and therefore \[ \begin{aligned} \bar s &= \frac1n \sum_{i=1}^n s_i \\ &= \frac1n \sum_{i=1}^n \frac{n}{\tau_n} k Z_i \\ &= \frac{k}{\tau_n}\sum_{i=1}^n Z_i \\ &= k. \end{aligned} \] The randomness cancels. The average \(\bar s\) is no longer path-dependent; it collapses to the deterministic constant \(k\). Once that happens, the disputed factorization becomes correct: \[ \mathsf P\left[\exp\left(k\sum_{i=1}^n Z_i\right)\right] = \prod_{i=1}^n \mathsf P[e^{k Z_i}] = \psi(k)^n, \] where \(\psi(s)=\mathsf P[e^{sZ_1}]\) is the moment generating function of a single clock increment.

Taking \(n\)th roots gives \[ \left(\mathsf P\left[\frac{W_n}{W_0}\right]\right)^{1/n} = \psi(k). \] If we denote the observable mean gross return in calendar time by \(\overline R\), then \[ \overline R = \psi(k), \qquad k = \psi^{-1}(\overline R). \] The function \(\psi^{-1}\) is the generalized logarithm. It appears because calendar time hides the random business clock, and the inverse MGF is the transform that recovers the additive growth rate in business time.
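A Monte Carlo sketch of this identity, under the homogeneous assumption \(\ln \overline G_i = k Z_i\) with a gamma clock. All parameters are illustrative, and `psi` and `psi_inv` are my names for the clock MGF and its inverse.

```python
import numpy as np

# Sketch: with ln Gbar_i = k Z_i and a gamma clock with mean 1 and
# variance theta, (E[W_n / W_0])^{1/n} should equal psi(k), and the
# inverse MGF should recover k.  Parameters are illustrative.
rng = np.random.default_rng(3)
theta, k, n, paths = 0.5, 0.06, 10, 200_000

Z = rng.gamma(1 / theta, theta, size=(paths, n))  # iid clock increments
# simplest wealth consistent with the assumption: W_i / W_{i-1} = exp(k Z_i)
W_n = np.exp(k * Z.sum(axis=1))
R_bar_mc = W_n.mean() ** (1 / n)                  # (E[W_n / W_0])^{1/n}

psi = lambda s: (1 - theta * s) ** (-1 / theta)   # clock MGF
psi_inv = lambda R: (1 - R ** (-theta)) / theta   # generalized logarithm
print(R_bar_mc, psi(k))                           # agree to Monte Carlo error
print(psi_inv(psi(k)))                            # recovers k, up to floating point
```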

Let’s reiterate, since this is the key insight. In business time, growth is ordinary geometric compounding. The natural logarithm extracts the additive growth rate \(\ln \overline G_i = k Z_i\). But investors observe prices in calendar time, not business time. Once the random clock is hidden, the observable mean gross return over one calendar period is \[ \overline R = \mathsf P[e^{kZ}] = \psi(k). \] At that point the ordinary logarithm is no longer the right inverse. To recover the additive business-time growth rate from the calendar-time return, we must apply the inverse MGF: \[ \psi^{-1}(\overline R)=k. \] That is the real content of generalized compounding. The inverse MGF is not a preference functional. It is the calendar-time transform that recovers the additive growth rate operating in business time. When the clock is deterministic, \(\psi(s)=e^s\), so \(\psi^{-1}=\ln\) and we are back in the classical Kelly world.

5 Scope of the repair

The repair is narrower than the original Section 4.1 claim. It does not validate the full general derivation for arbitrary period-by-period conditional means \(\overline G_i\). It works when \(\ln \overline G_i = k Z_i\) with the same deterministic constant \(k\) in every period. If instead \(\ln \overline G_i = k_i Z_i\) with deterministic but varying \(k_i\), then \[ \bar s = \frac{\sum_i k_i Z_i}{\tau_n} \] is generally still random. In that case the original paper’s reduction to a single deterministic \(\bar s\) fails again.

The repaired theorem supports a homogeneous setting: a fixed portfolio, stationary business-time dynamics, and a clock whose increments are iid and possess the relevant exponential moment. That is still a substantial and interesting result. It is just not the full general statement suggested by the original derivation.

6 Why Brownian motion in business time works

A simple and important example is geometric Brownian motion run on business time. Suppose log-price evolves as \[ X(\tau)=\mu \tau + \sigma B(\tau). \] Over a realized business-time increment \(Z_i\), \[ \Delta X_i \mid Z_i \sim N(\mu Z_i,\sigma^2 Z_i), \] so the conditional mean gross return is \[ \overline G_i = \mathsf P[e^{\Delta X_i}\mid Z_i] = e^{(\mu+\sigma^2/2)Z_i}. \] Hence \[ \ln \overline G_i = k Z_i, \qquad k=\mu+\sigma^2/2. \] This is exactly the assumption needed above. The repair therefore holds for Brownian motion in business time.
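A quick Monte Carlo check of this conditional mean; the parameter values are arbitrary.

```python
import numpy as np

# Check: conditional on a business-time increment z, the mean gross return
# of GBM in business time is exp((mu + sigma^2/2) z).  Values illustrative.
rng = np.random.default_rng(42)
mu, sigma, z = 0.03, 0.20, 1.7
dX = rng.normal(mu * z, sigma * np.sqrt(z), size=1_000_000)
G_bar_mc = np.exp(dX).mean()                   # Monte Carlo E[e^{dX} | z]
G_bar_exact = np.exp((mu + sigma**2 / 2) * z)  # lognormal mean
print(G_bar_mc, G_bar_exact)                   # agree to Monte Carlo error
```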

In fact the same structure holds more generally for exponential Lévy models in business time. If the log-return process has stationary independent increments and finite exponential moment at 1, then the conditional mean gross return over business time \(z\) takes the form \[ \mathsf P[e^{X(z)}] = e^{\kappa(1)z}, \] so again \(\ln \overline G_i = k Z_i\) with \(k=\kappa(1)\). Brownian motion is only the simplest case.

7 Counter-example

Consider an \(n=2\) period model with distinct continuously compounded returns \(r_1\) and \(r_2\), with \(r_1 \ne r_2\). Use operational time and set \[ W_0 = 1, \quad W_1 = \exp(r_1 Z_1),\quad W_2=\exp(r_1Z_1 + r_2 Z_2) \] with \(Z_i\) independent, identically distributed gamma variables with mean 1 and variance \(\nu\). The moment generating function of \(Z\) equals \[ \phi(r):=\mathsf E[e^{rZ}]=(1-\nu r)^{-1/\nu}. \] Set \(\tau_n = Z_1 + Z_2\). Note that the one-period mean gross returns are \(\mathsf E[W_1/W_0]=\phi(r_1)\) and \(\mathsf E[W_2/W_1]=\phi(r_2)\).

This example is not outside the scope of Carr–Cherubini’s Section 4.1. They explicitly allow period-by-period heterogeneity in the pre-subordinator return process:

We assume that value ratios \(V_i/V_{i-1}\) are independent of each other. We do not require that the value ratios \(V_i/V_{i-1}\) be identically distributed. The \(V\) process is a positive discrete-time multiplicative random walk.

After introducing the stochastic clock, they further note that, conditional on the clock, the wealth increments remain independent but are generally not identically distributed.

A two-period example with distinct continuously compounded returns \(r_1\) and \(r_2\) therefore tests the generality claimed in the paper itself. The point of the example is precisely that once the period-specific coefficients differ, the random average \(\bar s\) need not collapse to a constant, and the deconditioning step in equation (18) no longer follows.

Continuing, we want to define returns \(\bar s\) and \(s_i\) so that \[ \bar s\tau_n = r_1 Z_1+ r_2Z_2,\quad \bar s = \frac{s_1 + s_2}{2}. \] Note that \(\bar s\) and \(s_i\) are random because they depend on \(Z\) even though the \(r_i\) are fixed. Following Eqs. 14 and 17 in the paper, we can do this by setting \[ s_i = \frac{n}{\tau_n} r_i Z_i = \frac{n}{\tau_n} \ln(W_i/W_{i-1}). \] Then (Eq. 17), since \(n=2\), \[ \bar s = \frac{r_1 Z_1+ r_2Z_2}{\tau_n} = \frac{s_1 + s_2}{2}. \] Since the \(Z_i\) are exchangeable, \(\mathsf E[Z_i/\tau_n]=1/n\) by symmetry, and hence \(\mathsf E[s_i]=nr_i\mathsf E[Z_i/\tau_n]=r_i\).

Now we can look at the derivation in Eq. 18. It starts \[ \begin{aligned} \mathsf E[W_2] &= \mathsf E[\exp(r_1 Z_1 + r_2 Z_2)] \\ &= \mathsf E[\exp(\bar s \tau_2)] \\ &= \mathsf E[\exp(\bar s (Z_1 + Z_2))]. \end{aligned} \] Since the \(Z_i\) are independent, the first line factorizes: \[ \begin{aligned} \mathsf E[W_2] &= \mathsf E[\exp(r_1 Z_1 + r_2 Z_2)] \\ &= \mathsf E[\exp(r_1 Z_1)]\,\mathsf E[\exp(r_2 Z_2)] \\ &= \phi(r_1)\phi(r_2). \end{aligned} \] If we want to define \(\tilde r\) so that \(\phi(\tilde r)^2 = \phi(r_1)\phi(r_2)\), then from the known form of \(\phi\) we must have \[ \tilde r = \frac{1 - \left((1-\nu r_1)(1-\nu r_2)\right)^{1/2}}{\nu}. \] The paper argues from the second line \[ \begin{aligned} \mathsf E[W_2] &= \mathsf E[\exp(\bar s \tau_2)] \\ &= \mathsf E[\exp(\bar s (Z_1 + Z_2))] \\ &\overset{!}{=} \prod_i \mathsf E[\exp(\bar s Z_i)] \\ &= \mathsf E[\exp(\bar s Z_1)]^2 \\ &= \phi(\bar s)^n, \end{aligned} \] where the step marked \(!\) is the invalid one. The variables \(\bar s Z_i\) are not independent because \(\bar s=\bar s(Z_1,Z_2)\) depends on the same clock increments, so the factorization fails.
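The gap is easy to see numerically. The sketch below (parameter choices are mine) confirms \(\mathsf E[W_2]=\phi(r_1)\phi(r_2)\) and the closed form for \(\tilde r\), and also evaluates the product \(\prod_i \mathsf E[\exp(\bar s Z_i)]\) with the random \(\bar s\), which in general does not match.

```python
import numpy as np

# Counter-example check: gamma clock with mean 1, variance nu, and
# distinct rates r1 != r2.  Parameters are illustrative.
rng = np.random.default_rng(7)
nu, r1, r2 = 0.5, 0.10, -0.05
phi = lambda r: (1 - nu * r) ** (-1 / nu)          # clock MGF

# exact tilde_r satisfying phi(tilde_r)^2 = phi(r1) phi(r2)
tilde_r = (1 - np.sqrt((1 - nu * r1) * (1 - nu * r2))) / nu

N = 1_000_000
Z1 = rng.gamma(1 / nu, nu, size=N)
Z2 = rng.gamma(1 / nu, nu, size=N)
EW2_mc = np.exp(r1 * Z1 + r2 * Z2).mean()          # Monte Carlo E[W_2]
EW2_exact = phi(r1) * phi(r2)                      # correct value

s_bar = (r1 * Z1 + r2 * Z2) / (Z1 + Z2)            # random average rate
product = np.exp(s_bar * Z1).mean() * np.exp(s_bar * Z2).mean()
print(EW2_mc, EW2_exact)     # agree to Monte Carlo error
print(product)               # the invalid factorization; generally differs
```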

8 Examples

All of the examples below require the clock increment \(Z\) to have a finite MGF at the relevant value of \(k\).

8.1 Variance Gamma clock

If \(Z\) is Gamma with mean \(1\) and variance \(\theta\), its MGF is \[ \psi(s) = (1-\theta s)^{-1/\theta}. \] The inverse is \[ \psi^{-1}(R)=\frac{1-R^{-\theta}}{\theta}. \] This is the paper’s power-type generalized logarithm, with parameterization \(\gamma=-\theta\). As \(\theta \to 0\), we recover \[ \psi^{-1}(R)\to \ln R. \]
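A quick numeric check of this pair; the names `psi` and `psi_inv` and the test values are mine.

```python
import numpy as np

# Gamma clock: round-trip check of psi_inv(R) = (1 - R^{-theta}) / theta
# against the MGF, and the theta -> 0 limit.  Values illustrative.
psi = lambda s, theta: (1 - theta * s) ** (-1 / theta)
psi_inv = lambda R, theta: (1 - R ** (-theta)) / theta

R, theta = 1.25, 0.4
print(psi(psi_inv(R, theta), theta))     # recovers R
print(psi_inv(R, 1e-8), np.log(R))       # tiny theta: essentially ln R
```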

8.2 Normal Inverse Gaussian clock

If \(Z\) is inverse Gaussian with mean \(1\) and variance \(1/\theta\), then \[ \psi(s)=\exp\!\left(\theta\left(1-\sqrt{1-\frac{2s}{\theta}}\right)\right). \] Solving for \(s\) gives \[ \psi^{-1}(R)=\ln R - \frac{(\ln R)^2}{2\theta}. \] This is the quadratic log transform highlighted in the paper, formally analogous to a mean-variance objective.
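The same round-trip check works here; the inverse is exact because \(1-2s/\theta=(1-\ln R/\theta)^2\) when \(s=\ln R-(\ln R)^2/2\theta\). A sketch with my parameter choices:

```python
import numpy as np

# Inverse Gaussian clock: psi_inv(R) = ln R - (ln R)^2 / (2 theta)
# inverts psi(s) = exp(theta (1 - sqrt(1 - 2 s / theta))).  Values illustrative.
psi = lambda s, theta: np.exp(theta * (1 - np.sqrt(1 - 2 * s / theta)))
psi_inv = lambda R, theta: np.log(R) - np.log(R) ** 2 / (2 * theta)

R, theta = 1.2, 3.0
print(psi(psi_inv(R, theta), theta))     # recovers R
print(psi_inv(R, 1e8), np.log(R))        # small clock variance: essentially ln R
```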

8.3 Poisson clock, normalized to unit mean

To compare fairly with deterministic calendar time, it is best to normalize the clock to mean \(1\). Let \(N \sim \mathrm{Poisson}(\lambda)\) and set \[ Z=\frac{N}{\lambda}. \] Then \(\mathsf P Z = 1\) and \[ \psi(s)=\mathsf P[e^{sZ}] = \exp\!\left(\lambda\left(e^{s/\lambda}-1\right)\right). \] The inverse is \[ \psi^{-1}(R) = \lambda \ln\!\left(1+\frac{\ln R}{\lambda}\right). \] As \(\lambda \to \infty\), the normalized clock concentrates at \(1\), and \[ \psi^{-1}(R)\to \ln R. \] Standard Kelly reappears in the high-intensity, low-variance limit.
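Checking the Poisson-clock formulas numerically (illustrative values; `np.log1p` keeps the inverse accurate for large \(\lambda\)):

```python
import numpy as np

# Normalized Poisson clock: psi_inv(R) = lambda ln(1 + ln(R)/lambda)
# inverts psi(s) = exp(lambda (e^{s/lambda} - 1)).  Values illustrative.
psi = lambda s, lam: np.exp(lam * (np.exp(s / lam) - 1))
psi_inv = lambda R, lam: lam * np.log1p(np.log(R) / lam)

R, lam = 1.3, 5.0
print(psi(psi_inv(R, lam), lam))         # recovers R
print(psi_inv(R, 1e8), np.log(R))        # high intensity: essentially ln R
```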

8.4 Compound Poisson clock, normalized to unit mean

Let \[ Z=\frac1\lambda \sum_{m=1}^N Y_m, \] where \(N\sim\mathrm{Poisson}(\lambda)\) and the \(Y_m\) are iid Exponential\((1)\), so each jump has mean \(1\). Then \(\mathsf P Z = 1\), and \[ \psi(s) = \exp\!\left(\lambda\left(\frac{1}{1-s/\lambda}-1\right)\right) = \exp\!\left(\frac{s}{1-s/\lambda}\right). \] Writing \(y=\ln R\), we solve \[ y=\frac{s}{1-s/\lambda} \qquad\Longrightarrow\qquad s=\frac{\lambda y}{\lambda+y}. \] Hence \[ \psi^{-1}(R)=\frac{\lambda \ln R}{\lambda+\ln R}. \] This gives a bounded transform of log-wealth. It is another example of how the clock, not a utility function, determines the observable-time averaging rule.
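A final sketch confirms the round trip and the bound \(\psi^{-1}(R) < \lambda\), under my parameter choices:

```python
import numpy as np

# Compound Poisson clock: psi_inv(R) = lambda ln(R) / (lambda + ln(R))
# inverts psi(s) = exp(s / (1 - s/lambda)) and is bounded above by lambda.
psi = lambda s, lam: np.exp(s / (1 - s / lam))
psi_inv = lambda R, lam: lam * np.log(R) / (lam + np.log(R))

lam = 4.0
R = np.array([1.01, 2.0, 100.0, 1e6])
s = psi_inv(R, lam)
print(np.allclose(psi(s, lam), R))       # True: round trip holds
print(s.max() < lam)                     # True: bounded even as R grows
```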

9 Conclusion

The original derivation in Section 4.1 is too general. The deconditioning step requires more structure than the paper states. But the central idea survives. If the conditional expected log-return is linear in business time with a common deterministic slope, then the random average collapses to a constant, the factorization becomes valid, and the inverse MGF of the clock is exactly the right generalized logarithm in calendar time.

That repaired result is narrower than the published theorem, but it is still elegant. It preserves the most interesting message of the paper: long-run growth is not tied forever to the ordinary logarithm. The correct transform depends on how market time arrives.

References

1.
Carr, P.P., Cherubini, U.: Generalized Compounding and Growth Optimal Portfolios: Reconciling Kelly and Samuelson. In: Peter Carr Memorial Conference. pp. 74–93 (2022)