(Unpublished) preprint here. Joint work with Arjun Krishnan at the University of Utah. While we do a few slightly new things, this problem (and, I think, a different solution) appeared earlier, in particular as an exercise in Stanley's Enumerative Combinatorics. It does give a combinatorial (non-obvious?) product rule for certain Kostka numbers.
Papers here and here. Joint work with Danielle Ensign, Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian (indeed, all of these people did substantially more of the work than I did - I helped prove a related lemma and helped with some of the editing).
That said, I think there's something novel and appealing about intentionally using randomization in data settings as a defense against bias and noise. The mathematical model of apple tasting, and its related theory, deserve to be better known. What can you learn when there's a real cost to gaining information, and how do you balance those costs against the potential insights gained?
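For intuition, here is a minimal sketch of the apple-tasting feedback model: you only learn whether an apple was good or bad if you actually taste it, so a bit of deliberate random tasting is the price of information. The bad-apple rate, the pessimistic prior, and the exploration rate below are all made-up illustrative choices, not anything taken from the papers above.

```python
import random

# Toy simulation of apple-tasting (one-sided) feedback: the true label is
# only revealed when we act, i.e. taste the apple.
def run(rounds=10_000, true_bad_rate=0.1, explore=0.0, seed=1):
    rng = random.Random(seed)
    # Pessimistic starting belief: pretend we've already tasted 4 good and
    # 6 bad apples (purely illustrative prior).
    good, bad = 4, 6
    mistakes = 0
    for _ in range(rounds):
        is_bad = rng.random() < true_bad_rate

        # Estimate of the bad-apple rate from tasted apples only.
        est_bad_rate = bad / (good + bad)

        # Taste when apples look mostly good, or occasionally at random
        # just to buy information -- the intentional randomization.
        taste = est_bad_rate < 0.5 or rng.random() < explore

        if taste:
            good += not is_bad
            bad += is_bad
            mistakes += is_bad        # cost: ate a bad apple
        else:
            mistakes += not is_bad    # cost: threw away a good apple
    return mistakes

if __name__ == "__main__":
    for rate in (0.0, 0.01, 0.1):
        print(f"explore={rate:5.2f}  mistakes={run(explore=rate)}")
```

With no exploration, the pessimistic learner never tastes, never updates its belief, and throws away good apples forever; even a small amount of random tasting is enough to discover that the apples are mostly fine.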
An undergraduate research project with Duncan Metcalfe at the University of Utah. Poster presented at the University of Utah undergraduate research symposia.
The baseline amount of radiocarbon in the atmosphere is, sadly, not constant over time. A measurement alone therefore can't pin down both the elapsed time and the initial amount of radiocarbon, so we must work harder to use radiocarbon dating. There is existing software for this: OxCal.
We used this software, and also wrote a primitive version of our own so that we could explore more of the settings. The question was whether it is possible to get precise radiocarbon dates for a certain 'poorly behaved' set of years.
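To give a sense of what such a primitive version looks like, here is a toy sketch of the standard calibration idea: score every candidate calendar year by how well the calibration curve's predicted radiocarbon age matches the measured one, then normalize into a probability distribution over calendar years. The calibration curve below is entirely made up (with an exaggerated wiggle standing in for a 'poorly behaved' stretch); real work uses the IntCal curve and tools like OxCal.

```python
import math

# Made-up toy calibration curve: calendar year BP -> (expected 14C age BP,
# curve uncertainty).  The sinusoidal term adds wiggles like the flat or
# reversed stretches that make some calendar ranges hard to date precisely.
CURVE = {year: (0.95 * year + 60 * math.sin(year / 50.0), 15.0)
         for year in range(2000, 3001)}

def calibrate(measured_age, lab_sigma):
    """Posterior over calendar years for one radiocarbon determination."""
    weights = {}
    for year, (mu, curve_sigma) in CURVE.items():
        var = lab_sigma ** 2 + curve_sigma ** 2
        # Gaussian likelihood of the measurement given this calendar year.
        weights[year] = math.exp(-0.5 * (measured_age - mu) ** 2 / var) / math.sqrt(var)
    total = sum(weights.values())
    return {year: w / total for year, w in weights.items()}

if __name__ == "__main__":
    posterior = calibrate(measured_age=2400.0, lab_sigma=25.0)
    # Collect calendar years until 95% of the posterior mass is covered.
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for year, p in ranked:
        kept.append(year)
        mass += p
        if mass >= 0.95:
            break
    print(f"95% of the probability falls on {len(kept)} calendar years "
          f"between {min(kept)} and {max(kept)} BP")
```

Where the curve is steep, the 95% set is narrow; where it wiggles or flattens, the same lab precision spreads over many more calendar years, which is exactly the difficulty with the 'poorly behaved' years.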