Non-Bayesian Risk
Distortion risk measures
When can a risk preference be represented by a particular functional form? EU, SEU, CEU, Savage, etc.
A DRM evaluates a single \(X\) with a single non-additive probability \(\nu\), which can be viewed as a range of measures \(\mathsf{Q}\sim \mathsf{P}\). But in any given situation there is no uncertainty about which \(\mathsf Q\) the DM will use (assuming \(X\) is increasing); the selection is not random. The setup is not “Bayesiable” in the sense that it obeys a different LLN, per the Marinacci papers.
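For reference, the standard form (a paraphrase of the usual definition, with \(g\) a distortion function and \(S_X\) the survival function of \(X\ge 0\)):

\[
\rho_g(X)=\int_0^\infty g\bigl(S_X(x)\bigr)\,dx=\int X\,d\nu,\qquad \nu=g\circ\mathsf P,
\]

and for concave \(g\) the standard dual representation gives \(\rho_g(X)=\sup_{\mathsf Q\in\operatorname{core}(\nu)}\mathsf E_{\mathsf Q}[X]\), the “range of \(\mathsf Q\)” reading above. The supremum is attained at a \(\mathsf Q\) whose density is an increasing function of \(X\), which is why there is no uncertainty about which \(\mathsf Q\) gets used.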
The Bayesian setup has \(X=X\mid\theta\), one of a family of RVs indexed by an (unknown) parameter \(\theta\), under a single \(\mathsf P\). Samples of \(X\) converge on average to \(\mathsf{E}[X]\); samples of \(X\mid\theta\) converge to \(\mathsf{E}[X\mid\theta]\).
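A quick numerical illustration of the contrast (an illustrative sketch; the two-point \(\theta\) and Bernoulli draws are choices made here, not from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
thetas, prior = np.array([0.2, 0.8]), np.array([0.5, 0.5])

# Fresh theta on every draw: i.i.d. samples of X; averages converge to E[X] = 0.5.
theta_each = rng.choice(thetas, size=n, p=prior)
x_mix = rng.binomial(1, theta_each)

# Theta drawn once and held fixed: samples of X | theta; averages converge to theta.
theta = rng.choice(thetas, p=prior)
x_cond = rng.binomial(1, theta, size=n)

print(f"E[X] = 0.5;          mixture sample mean     = {x_mix.mean():.4f}")
print(f"realized theta = {theta}; conditional sample mean = {x_cond.mean():.4f}")
```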
[1]: comonotonic additive functionals are Choquet integrals.
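A small numerical check of that statement (a sketch under assumed inputs: three states, uniform \(\mathsf P\), distortion \(g(t)=t^2\)):

```python
def choquet(x, nu):
    """Choquet integral of a non-negative payoff dict x w.r.t. capacity nu (callable on frozensets)."""
    order = sorted(x, key=x.get)        # states in increasing payoff order
    total, prev_level = 0.0, 0.0
    for i, s in enumerate(order):
        upper = frozenset(order[i:])    # upper level set {X >= x_(i)}
        total += (x[s] - prev_level) * nu(upper)
        prev_level = x[s]
    return total

# Capacity nu(A) = g(P(A)) with uniform P on three states and g(t) = t^2.
states = ("a", "b", "c")
nu = lambda A: (len(A) / len(states)) ** 2

X = {"a": 1, "b": 2, "c": 3}
Y = {"a": 0, "b": 5, "c": 6}            # comonotonic with X (same state ordering)
Z = {"a": 3, "b": 2, "c": 1}            # anti-comonotonic with X

XY = {s: X[s] + Y[s] for s in states}
XZ = {s: X[s] + Z[s] for s in states}

print(choquet(XY, nu), choquet(X, nu) + choquet(Y, nu))  # equal: comonotonic additivity
print(choquet(XZ, nu), choquet(X, nu) + choquet(Z, nu))  # unequal: additivity fails
```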
[2]
[3]: discrete setting version of next paper.
[4]: there is a “lifting” of a non-additive prob to a larger space and a measure. Cf. Stone spaces? (See also [5].) Expresses non-additive probs in terms of unanimity games = VaR!!
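To unpack the unanimity-game remark (standard definitions; the VaR identification is a gloss on the note’s “= VaR”):

\[
u_A(B)=\begin{cases}1, & A\subseteq B,\\ 0, & \text{otherwise,}\end{cases}
\qquad
\int X\,du_A=\min_{\omega\in A}X(\omega),
\]

and on a finite space every capacity decomposes uniquely as \(\nu=\sum_{A\neq\emptyset} m(A)\,u_A\) via the Möbius transform \(m\). Taking the \(\{0,1\}\)-valued distortion \(g(s)=\mathbf 1\{s>1-p\}\) in \(\nu=g\circ\mathsf P\) turns the Choquet integral into a \(p\)-quantile of \(X\) (up to boundary conventions), i.e., \(\mathrm{VaR}_p\).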
[6]
[7]
[8]
[9]: “IID: independently and indistinguishably distributed”; includes the Marinacci LLN result.
[10]
[11]
[12]: it can be rational to ignore Bayes; making up probabilities and then acting on them is not rational. The full Bayesian recipe:

* Grand state space
* Pick a prior
* Bayesian updating
* Utility for the DM
In CS, stats, and ML: a “small” state space (e.g., a single parameter) and no utility. More constrained.
(Book)
[16]
[17]
[18]
[19]: more Marinacci-like results.
[15]
a mode of behavior is irrational for a decision-maker if, when the latter is exposed to the analysis of her choices, she would have liked to change her decision, or to make different choices in similar future circumstances. Note that this definition is based on a sense of regret. The analysis used for this test should not include new factual information.
We thus refine the notion of rationality as follows: a decision is subjectively rational for a decision-maker if she cannot be convinced that this decision is wrong; a decision is objectively rational for a decision-maker if she can be convinced that this decision is right. For a choice to be subjectively rational, it should be defensible once made; to be objectively rational, it needs to be able to beat other possible choices.
Bayesian updating is generally considered to be the only rational approach to learning once one has a prior. The normative appeal of Bayes’s formula has only rarely been challenged, partly because the formula says very little: it only suggests to ignore that which is known not to be the case. Following the empiricist principle of avoiding direct arguments with facts, Bayesian updating only suggests that probabilities be renormalized to retain the convention that they sum up to unity.
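In symbols, the “renormalization” reading is just

\[
\mathsf P(A\mid B)=\frac{\mathsf P(A\cap B)}{\mathsf P(B)}:
\]

mass on states inconsistent with the observed \(B\) is discarded, and the surviving mass is rescaled to sum to one; nothing else about the prior is revised.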
The notion of a ‘state of the world’ varies across disciplines and applications. In Bayesian statistics, as described above, the state may be the unknown parameter, coupled with the observations of the experiment. Should one ask, where would I get a prior over the state space, the answer might well be, experience. If the entire statistical problem has been encountered in the past, one may have some idea about a reasonable prior belief, or at least a class of distributions that one may select a prior from.
These extensions of the notion of a state of the world are very elegant, and are sometimes necessary to deal with conceptual problems. Moreover, one may always define the state space in a way that each state would provide answers to all relevant questions. But defining a prior over such a state space becomes a very challenging task, especially if one wishes this definition not to be arbitrary. The more informative the states are, the larger the space is, and, at the same time, the less information one has for the formation of a prior. If, for example, one starts with a parameter of a coin, \(p\), one has to form a prior over the interval \([0, 1]\), and one may hope to have observed problems with coins that would provide a hint about the selection of an appropriate prior in the problem at hand. But if these past problems are now part of the description of a state, there are many more states, as each describes an entire sequence of inference problems. The prior should now be defined at the beginning of time, before the first of these problems has been encountered. Worse still, the choice of the prior over this larger space has, by definition, no information to rely on: should any such information exist, it should be incorporated into the model, requiring the ‘true’ prior to be defined on a yet larger state space.
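A concrete instance of the coin example (the conjugate Beta prior is an illustration added here, not part of the quoted argument):

\[
p\sim\mathrm{Beta}(a,b)
\quad\Longrightarrow\quad
p\mid(k\text{ heads in }n\text{ tosses})\sim\mathrm{Beta}(a+k,\;b+n-k),
\]

which only pushes the question back to where \(a\) and \(b\) come from: exactly the regress the passage describes.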
Suppose that two individuals, A and B, disagree about the probability of an event, and assign to it probabilities .6 and .4, respectively. Let us now ask individual A, If you’re so certain that the probability is .6, why can’t you convince B of the same estimate? Or, if B holds the estimate .4, and you can’t convince her that she’s wrong, why are you so sure of your .6? That is, we have already agreed that the estimate of .6 can’t be objectively rational, as B isn’t convinced by it. It is still subjectively rational to hold the belief .6, as there is no objective proof that it is wrong. However, A might come to ask herself, do I feel comfortable with an estimate that I cannot justify?
It follows that the Bayesian approach is supported by very elegant axiomatic derivations, but that it forces one to make arbitrary choices. Especially when the states of the world are defined, as is often the case in economic theory, as complete descriptions of history, priors have to be chosen without any compelling justification.
The Bayesian approach is quite successful at representing knowledge, but rather poor when it comes to representing ignorance. When one attempts to say, within the Bayesian language, ‘I do not know’, the model asks, ‘How much do you not know? Do you not know to degree .6 or to degree .7?’ One simply doesn’t have an utterance that means ‘I don’t have the foggiest idea’.
[20]