Robust Bayes analysts (Berger 1994) have used many ways to define a class of priors, including
Parametric conjugate families,
Parametric but non-conjugate families,
Density-ratio classes (distributions with bounded densities),
Epsilon-contamination, mixture, quantile classes, etc. (the epsilon-contamination case is sketched just after this list), and
Bounds on cumulative distribution functions.
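To make the idea of computing with a class of priors concrete, here is a minimal sketch (ours, not taken from any of the cited papers) of robust bounds on a posterior mean over an epsilon-contamination class Γ = {(1 − ε)π0 + εq}. Everything specific in it is an illustrative assumption: the nominal prior N(0, 1), a single normal observation x = 1.5, ε = 0.1, and the grid. The extremes over q are attained at point-mass contaminations, which is what the vectorized computation exploits.

```python
# Hedged sketch: range of the posterior mean over the epsilon-contamination
# class Gamma = {(1 - eps) * pi0 + eps * q}, with q ranging over point masses
# on a grid.  All specifics (pi0, the datum, eps, the grid) are illustrative.
import numpy as np
from scipy.stats import norm

theta = np.linspace(-6.0, 6.0, 2001)        # discretized parameter grid
dtheta = theta[1] - theta[0]

pi0 = norm.pdf(theta, loc=0.0, scale=1.0)    # nominal prior N(0, 1)
lik = norm.pdf(1.5, loc=theta, scale=1.0)    # likelihood of one datum x = 1.5
eps = 0.1                                    # contamination fraction

# Ingredients of the nominal (uncontaminated) posterior mean.
m0 = np.sum(lik * pi0) * dtheta              # marginal likelihood under pi0
num0 = np.sum(theta * lik * pi0) * dtheta    # unnormalized posterior mean

# For a point-mass contamination at theta_c, the posterior mean is a weighted
# combination of the nominal numerator/denominator and theta_c itself.
post_means = ((1 - eps) * num0 + eps * theta * lik) / ((1 - eps) * m0 + eps * lik)

print("nominal posterior mean:", num0 / m0)  # about 0.75 for this conjugate setup
print("range over the class:  [%.3f, %.3f]" % (post_means.min(), post_means.max()))
```

Even this simple class turns a single posterior mean into an interval of posterior means; the classes below differ only in how the neighborhood of priors is specified.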
The structure that Basu (1994) uses to represent uncertainty about the prior is a distribution band (which we call a p-box). It bounds cumulative distributions, rather than bounding densities or specifying the class in some other way. I think that this approach has not been very popular in robust Bayes analysis because it leads to the following striking triviality result.
If the prior distribution and the normalized integral of the likelihood function are both constrained only by a p-box (distribution band), then, no matter what the particular shapes of those p-boxes are, all one can ever conclude about the posterior is its range, which is always simply the intersection of the supports of the two p-boxes. It's easy to see why this is so. If the prior and likelihood are constrained only by bounds on their integrals, then the respective classes of priors and likelihood functions must include members that cancel each other out in the way depicted in the graph at right. A shoulder on a CDF corresponds to zero density. Thus we can drive the posterior to zero anywhere in the region where the prior and likelihood overlap, which is the intersection of their supports. We can also adjust the two functions so that they cancel each other out everywhere except at a single point. Because of the renormalization of mass, all the probability flows to that point, making the posterior a delta function with unit probability mass there. We can of course place that point anywhere the prior and likelihood overlap, which again is the intersection of their supports. Because the probability mass of the posterior can be as high as one or as low as zero anywhere, its p-box description is simply the interval given by the intersection of the supports of the prior and the likelihood, as depicted in the graph at right. This is trivial because we knew this restriction before we learned anything about the prior or the likelihood.
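The cancellation argument can be checked numerically. The following is a hedged sketch under assumptions of our own choosing (a discretized parameter grid on [−3, 3], bands taken as ±0.05 envelopes around normal CDFs, and a target point θ0 = 1.2); it is not the construction in Basu (1994). It builds staircase CDFs that stay inside both bands but interleave their atoms, so the pointwise product of prior and likelihood vanishes everywhere except at θ0, and after renormalization the posterior is a point mass there.

```python
# Hedged sketch, not the construction in Basu (1994): distribution bands are
# stored as pointwise CDF bounds on a grid; staircase CDFs inside the bands
# interleave their atoms so the posterior collapses to a point mass at theta0.
import numpy as np
from scipy.stats import norm

grid = np.linspace(-3.0, 3.0, 601)           # shared support, spacing 0.01
theta0 = 1.2                                 # any point in the overlap of the supports
i0 = int(np.argmin(np.abs(grid - theta0)))   # grid index nearest theta0

# Illustrative bands: +/- 0.05 envelopes around two normal CDFs (assumptions).
prior_mid = norm.cdf(grid, loc=0.0, scale=1.0)
lik_mid = norm.cdf(grid, loc=0.5, scale=1.0)  # CDF of the normalized likelihood
prior_lo, prior_hi = np.clip(prior_mid - 0.05, 0, 1), np.clip(prior_mid + 0.05, 0, 1)
lik_lo, lik_hi = np.clip(lik_mid - 0.05, 0, 1), np.clip(lik_mid + 0.05, 0, 1)

def interleaved_masses(mid_cdf, keep_even, i0, atom=1e-6):
    """Masses of a staircase CDF that tracks mid_cdf but places its atoms only
    on even- (or odd-) indexed grid points, plus a tiny atom at index i0."""
    m = np.diff(mid_cdf, prepend=0.0)
    m[(1 if keep_even else 0)::2] = 0.0      # zero out every other increment
    m[i0] = atom                             # both distributions keep mass at theta0
    return m / m.sum()

prior_mass = interleaved_masses(prior_mid, keep_even=True, i0=i0)
lik_mass = interleaved_masses(lik_mid, keep_even=False, i0=i0)

# The staircases track the band midpoints to within about one grid step, so
# they really are members of the two classes defined by the bands.
for mass, lo, hi in [(prior_mass, prior_lo, prior_hi), (lik_mass, lik_lo, lik_hi)]:
    cdf = np.cumsum(mass)
    assert np.all(cdf >= lo - 1e-9) and np.all(cdf <= hi + 1e-9)

# The pointwise product is zero except at theta0, so renormalization sends all
# of the posterior's probability mass to that single point.
posterior = prior_mass * lik_mass
posterior /= posterior.sum()
print("posterior mass at theta0:", posterior[i0])   # prints 1.0
```

Choosing a different θ0 anywhere in the overlap of the two supports gives the same collapse, which is exactly the triviality described above.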
We would like to explore how the additional assumptions that Basu (1994) made about symmetry and unimodality can obviate this triviality result. It would seem that unimodality alone would suffice to preclude the trivial result. What could be inferred under that assumption? What other reasonable assumptions might be made, short of full distributional assumptions (see the previous problem), and what can be said about the posterior under such assumptions?
Our desire is to be able to make useful calculations in the face of substantial epistemic uncertainty about the prior and the likelihood. It would be most useful to have a suite of results that apply under sets of assumptions of varying strength, ranging from knowing the likelihood function and the prior's distribution family and parameters perfectly to knowing virtually nothing about them (where the triviality result might be the best possible inference).
References
Basu, S. (1994). Variations of posterior expectations for symmetric unimodal priors in a distribution band. Sankhyā: The Indian Journal of Statistics, Series A 56: 320-334.
Basu, S., and A. DasGupta (1995). Robust Bayesian analysis with distribution bands. Statistics and Decisions 13: 333-349.
Berger, J.O. (1994). An overview of robust Bayesian analysis (with discussion). Test 3: 5-124. http://www.stat.duke.edu/~berger/papers/overview.ps