Suppose we know the following about the random variables W, H and A:
W × H = A
W ∈ [23, 33], E(W) = 24, V(W) = 4
H ∈ [112, 150], E(H) = 120, V(H) = 144
A ∈ [2000, 3200], E(A) = 3100, V(A) = 40,000
which says, for instance, that the random variable W can only take values within the range [23, 33], and that its mean and variance are 24 and 4 respectively. We do not know anything about the dependence among the variables, and indeed, the specified means and variances are not consistent with any simple independence assumption: under independence we would need E(A) = E(W) E(H) = 24 × 120 = 2880, which contradicts E(A) = 3100.
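To see where constraints like these come from, here is a minimal Python sketch of bounding a CDF from a range, a mean, and a variance. The helper name mmmv_bounds is ours, and Cantelli's one-sided inequality stands in for Risk Calc's mmmv construction, which is tighter; treat this as an illustration of the idea, not Risk Calc's actual algorithm:

import numpy as np

def mmmv_bounds(a, b, m, v, n=200):
    """Outer bounds on the CDF of X given X in [a, b], E(X) = m, V(X) = v.

    Cantelli's inequality gives, for x > m,
        P(X >= x) <= v / (v + (x - m)^2)   (lifts the lower CDF bound)
    and, for x < m,
        P(X <= x) <= v / (v + (m - x)^2)   (caps the upper CDF bound).
    The range constraint pins F to 0 below a and to 1 at b and above.
    These bounds are valid but looser than Risk Calc's mmmv p-box.
    """
    xs = np.linspace(a, b, n)
    lower = np.zeros(n)
    upper = np.ones(n)
    for i, x in enumerate(xs):
        if x > m:
            lower[i] = (x - m) ** 2 / (v + (x - m) ** 2)
        if x < m:
            upper[i] = v / (v + (m - x) ** 2)
    lower[-1] = 1.0  # F(b) = 1 because X never exceeds b
    return xs, lower, upper

# The three marginals stated above (the p-boxes WW, HH, AA below):
for name, args in (("W", (23, 33, 24, 4)),
                   ("H", (112, 150, 120, 144)),
                   ("A", (2000, 3200, 3100, 40000))):
    xs, lo, hi = mmmv_bounds(*args)
    mid = len(xs) // 2
    print(name, "F bounds at midrange:", round(lo[mid], 3), round(hi[mid], 3))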
We can analyze this information in Risk Calc using this run stream:
W = WW = mmmv(23, 33, 24, 4);
H = HH = mmmv(112, 150, 120, 144);
A = AA = mmmv(2000, 3200, 3100, 40000);
W = imp(W, A/H);
H = imp(H, A/W);
A = imp(A, W*H)
clear; show W in red; show WW in blue
clear; show H in red; show HH in blue
clear; show A in red; show AA in blue
which produces the following outputs:
The blue p-boxes correspond to the constraints on the random variables’ CDFs given just their ranges and moments. The tighter red p-boxes represent the constraints once the fact that A = W × H is taken into account.
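Each imp() call in the run stream intersects a variable's p-box with the p-box implied for it by the other two through A = W × H. Doing this for full p-boxes under unknown dependence requires Fréchet-type bounds on products and quotients; the Python sketch below (our own illustration, not Risk Calc code) propagates only the supports, which already tightens the ranges:

def interval_div(x, y):
    # [x1, x2] / [y1, y2]; valid here because all endpoints are positive
    return (x[0] / y[1], x[1] / y[0])

def interval_mul(x, y):
    # [x1, x2] * [y1, y2] for positive intervals
    return (x[0] * y[0], x[1] * y[1])

def imp(x, y):
    # intersection of two intervals; the constraints must be consistent
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    assert lo <= hi, "inconsistent constraints"
    return (lo, hi)

W, H, A = (23.0, 33.0), (112.0, 150.0), (2000.0, 3200.0)

W = imp(W, interval_div(A, H))   # -> (23.0, ~28.57): A/H caps W's upper end
H = imp(H, interval_div(A, W))   # -> (112.0, ~139.13)
A = imp(A, interval_mul(W, H))   # -> (2576.0, 3200.0): W*H lifts A's lower end

print("W =", W)
print("H =", H)
print("A =", A)

In Risk Calc the same intersection is carried out over the whole CDF bounds rather than just the supports, which is what yields the red p-boxes.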
How would a robust Bayesian express and solve this analysis? How do the robust Bayes posterior classes compare to the red p-boxes? How would the solution strategy differ if we additionally knew that W and H were independent of each other (and had moment constraints consistent with independence)?