Post date: Jan 30, 2014 9:31:10 PM
C-boxers:
I hope I'll soon get around to creating a "Say" page on the public and private c-box Google Sites where the material of this email should be archived. In the meantime, below are some residual notes from drafting the manuscript for the ICVRAM paper.
I still need to write the language explaining computing with c-boxes and why we can get away with it even though the confidence distribution (CD) people cannot. There was no space (or time) for it in the ICVRAM draft, but it's obviously a big hole that we'll soon need to address.
The ICVRAM paper discusses c-boxes and the derivative p-boxes analogous to prediction distributions. I have notes from Michael somewhere about how to define and derive tolerance structures, but of course I've misplaced them and I wasn't able to reconstruct them in the rush to finish the draft for the deadline. But I think these tolerance structures are potentially a big deal, and maybe even a strict advantage over the Bayesian results. We should struggle to work them out, at least for the normal and nonparametric cases, as exemplars for how they work and how cool they'll be.
It looks like I can intervalize some of Michael's formulas, including even for the binomial coefficient. By this I mean that it looks like I can replace calls to the env function by passing interval parameters as the distribution arguments. I'm not sure I can always do it, but I was surprised that I was able to do it at all. There are two issues: Does it still return the correct edges of the c-boxes? And should we do it if it then implies that all of the contents are combed [that is, that they are all beta distributions]? If so, that would probably be a very good reason not to do it.
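To make that concrete, here is a minimal Python sketch for the binomial-p case, where the c-box is env(Beta(k, n-k+1), Beta(k+1, n-k)); treating the intervalized version as the envelope over the corner betas with a in [k, k+1] and b in [n-k, n-k+1] is just my reading of the idea, not a settled definition, and the values of k and n are made up.

# Sketch of the "intervalized" binomial-p c-box (corner-envelope reading;
# k and n are made-up example values)
import numpy as np
from scipy.stats import beta

k, n = 3, 10                               # observed successes out of n trials
p = np.linspace(0.001, 0.999, 999)

# Standard c-box for binomial p: env(Beta(k, n-k+1), Beta(k+1, n-k))
left  = beta.cdf(p, k, n - k + 1)          # upper CDF bound (left edge)
right = beta.cdf(p, k + 1, n - k)          # lower CDF bound (right edge)

# "Intervalized" version: a single beta with interval parameters
# a in [k, k+1], b in [n-k, n-k+1]; take the envelope over the corners
corners = [beta.cdf(p, a, b) for a in (k, k + 1) for b in (n - k, n - k + 1)]
ileft, iright = np.max(corners, axis=0), np.min(corners, axis=0)

# First issue: does the intervalized envelope reproduce the c-box edges?
print(np.allclose(left, ileft), np.allclose(right, iright))   # True, True here

Of course this says nothing about the second issue: the edges match, but writing the box as one beta with interval parameters might tempt us (or the software) to treat everything inside as a beta, which is exactly the combing worry.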
I remind us that we still don't know whether our ancillary moment-propagation software is needed, or appropriate, for use with c-box constructors and the subsequent calculations they might be involved in.
I've worked a little bit more on the other Google Site for the #6 problem, which is the binomial(n, p) with both parameters unknown. But it still needs work.
I recollect that Michael said that his c-boxes were turning out to be the same as Bayesian posteriors under the Jeffreys prior, at least in the unidimensional cases. But I guess I don't have any examples of that. The normal mu seems to match the reference (MDIP) prior, but I guess it's two-dimensional with both mu and sigma. The binomial p certainly doesn't match Jeffreys either. Is it for the same reason, or is there also a caveat about discrete v. continuous? I guess I should look to the exponential as the example. Are there others?
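To pin down at least the binomial part of that, here's a quick numeric sketch (Python again, made-up k and n) comparing the p c-box edges to the Jeffreys posterior Beta(k+1/2, n-k+1/2): the posterior lands inside the box, but the box has positive width, so the two are plainly not the same object.

# Quick check that the binomial-p c-box is not the Jeffreys posterior;
# k and n are made-up example values
import numpy as np
from scipy.stats import beta

k, n = 3, 10
p = np.linspace(0.001, 0.999, 999)

cbox_left  = beta.cdf(p, k, n - k + 1)          # c-box upper (left) edge
cbox_right = beta.cdf(p, k + 1, n - k)          # c-box lower (right) edge
jeffreys   = beta.cdf(p, k + 0.5, n - k + 0.5)  # posterior under the Beta(1/2,1/2) prior

# The Jeffreys posterior CDF sits between the two edges, but the edges
# don't coincide, so the c-box carries extra (epistemic) width
print(np.all((cbox_right <= jeffreys) & (jeffreys <= cbox_left)))  # True
print(np.max(cbox_left - cbox_right))                              # > 0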
Given that we admit that c-boxes are not unique, and that I plan to characterize this as a feature rather than a bug (even though I claim it's a bug for the Bayesians), it would be nice to have some examples where there's more than one c-box for one of our problems. Obviously it'd be easy to make a trivial one by fattening an existing c-box, but that wouldn't do for my purpose. I don't think Michael actually gave an example in his first paper, did he? Did you, Michael? Do you have an example of two c-boxes for a single parameter context?
We may be adding Steve Munch to our list of collaborators and coauthors. He very graciously came through with a last-minute pronouncement on the whole nonparametric Bayesian thing that I think will keep us out of the very hot water we might otherwise have endured in the review process. I will put his communications onto that Say page as soon as I can.
Cheers,
Scott