Post date: Apr 07, 2021 7:10:46 PM
Hey, Ryan,
Trust you enjoyed your holiday, and that vaccination and epidemiologic freedom are right around the corner for you.
As a part of some work under a multi-institutional programme grant on “digital twins” (a currently fashionable but possibly fatuous topic in engineering), we’ve been doing some thinking about a cloud-based platform for sharing scientific and engineering models and analyses.
I’m sure you agree that traditional publication is an utterly inappropriate way to promulgate these sorts of scholarly outputs. It’s not just timeliness issues, but provenance and accessibility issues at both superficial and profound levels. Something better than what we have should have been possible for Neil Ferguson’s COVID model, to augment the team’s reports and the Coursera (!) production. National pandemic policy in the UK and beyond was based on this model, but I’m not sure everyone would agree that peer review was properly exercised.
Models are a dime a dozen, or ought to be, and we really shouldn’t be littering the literature with them. I think someone has argued that model results without the source code are just advertising, not worthy of journal publication. At the same time, access to source code is hardly accessibility in any real sense. You can obscure a lot in some sloppy code.
Yet models can also be very important, nationally, institutionally, and even personally. And their development history and dependence on data and modeller choices should be explicit, recorded, and traceable.
The envisioned platform would support GitHub-style sharing and branching, but it could also do a lot more, including automatically generating language-selectable user interfaces that allow the models and analyses to be re-run with new data or variant assumptions, in support of broader accessibility or even citizen science. It would also automatically check units, rescale plots and create new plots among the variables, handle elementary uncertainty propagation, track extrema and compute risk-analytic outputs, and record data provenance and justifications of modelling choices, so that the model can be interrogated in ways that a static report cannot.
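To make the unit-checking and uncertainty-propagation bits concrete, here is a toy sketch of the flavour I have in mind, in Python, with the off-the-shelf pint and uncertainties libraries merely standing in for machinery the platform would run natively behind its generated interfaces:

import pint
from uncertainties import ufloat

ureg = pint.UnitRegistry()

# Modeller-supplied inputs, with declared units and stated uncertainties.
distance = ufloat(120.0, 5.0) * ureg.meter    # 120 +/- 5 m
elapsed = ufloat(9.8, 0.2) * ureg.second      # 9.8 +/- 0.2 s

# Units and uncertainties are carried through the arithmetic:
speed = distance / elapsed
print(speed)    # roughly (12.2 +/- 0.6) meter / second

# And dimensional nonsense is caught rather than silently computed:
try:
    distance + elapsed
except pint.DimensionalityError as err:
    print("rejected:", err)

Of course the modeller shouldn’t have to write any of this by hand; the platform would do it automatically from declared units and uncertainties, which is rather the point.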
Some of these ideas are collated at https://sites.google.com/site/davmarkup/.
I was wondering whether this seems interesting to you with respect to your work on Researchers.One. I’m very keen on the idea of sisters doing it for themselves. And I note that you say “more is coming” on Researchers.One. (By the way, I got confused by ResearchOne. Have you exchanged letters with them yet?)
Obviously this is mostly a crackpot idea at this point, but I was thinking it might be fun to host a symposium or a special session somewhere. Or maybe even a proposal if we get the right collaborators.
Any thoughts?
Best regards,
Scott