A lot of quantitative work in the modern world is devoted to solving particular problems, to optimising engineering designs, to addressing emerging crises, and to assessing the possible consequences of old decisions and new circumstances. Traditional scientific publication is usually an utterly inappropriate way to promulgate such quantitative models and analyses. The problems are not just ones of timeliness, but of provenance and accessibility, at both superficial and profound levels. Next-generation science and engineering need to reconsider these sorts of scholarly outputs and provide better mechanisms to make them known to the world.
The Covid-19 pandemic has illustrated that we need infrastructure and conventions substantially better than what we currently have. National policy in the UK and beyond was based on Professor Neil Ferguson's model, yet it is hard to argue that the model underwent thorough peer review, despite his team's thoughtful reports and videos. Models can be very important nationally, institutionally, and even personally, and their development history and their dependence on data and modeller choices should be explicit, recorded, and traceable.
At the same time, these sorts of models are a dime a dozen, or ought to be, and we really shouldn't be littering the literature with them. Many writers have argued that a paper about a model or its results without the source code is just advertising, not worthy of journal publication. Yet even provisions to make source code accessible rarely guarantee accessibility in any real sense that enables proper scientific review. You can obscure a lot in some sloppy code.
What's needed is a cloud-based platform for sharing scientific and engineering models and analyses, with code sharing like GitHub, visualisation and data analytics like Plotly, linking like ORCID and the Semantic Web, facilities for collaboration like Wikipedia, and security like blockchain. We envision a platform that would not only share code and handle branching, but also support automatically generated, language-selectable user interfaces that allow the models and analyses to be re-run with new data or variant assumptions, in support of broader accessibility or even citizen science. Such a platform would also automatically check units, rescale plots and create new plots among the variables, handle sensitivity analyses and elementary uncertainty propagation, track extrema and compute risk-analytic outputs, and record data provenance and the justifications for modelling choices, so that a model can be interrogated in ways that a static report cannot.
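To make two of these automatic checks concrete, here is a minimal Python sketch, not the platform's actual code, of dimensional checking and elementary (first-order) uncertainty propagation using the well-known pint and uncertainties libraries; the braking-distance model is a made-up toy example.

```python
# A minimal sketch (not the platform's code) of two checks described above:
# dimensional consistency via `pint` and first-order uncertainty
# propagation via `uncertainties`.
import pint
from uncertainties import ufloat

ureg = pint.UnitRegistry()

# A toy model: braking distance d = v**2 / (2 * mu * g)
def braking_distance(v, mu, g):
    return v**2 / (2 * mu * g)

# Unit checking: pint raises DimensionalityError on inconsistent units.
v = 30.0 * ureg.meter / ureg.second
g = 9.81 * ureg.meter / ureg.second**2
d = braking_distance(v, 0.7, g)
print(d.to(ureg.meter))          # dimensions verified automatically

# Elementary uncertainty propagation: ufloat carries a standard deviation
# through arithmetic by linear (first-order) error propagation.
v_u = ufloat(30.0, 1.5)          # speed known to +/- 1.5 m/s
mu_u = ufloat(0.7, 0.05)         # friction coefficient +/- 0.05
d_u = braking_distance(v_u, mu_u, 9.81)
print(d_u)                       # nominal value with propagated uncertainty
```

Because the units and the uncertainties ride along with the values, a platform can verify dimensional consistency and report propagated error without the modeller writing any extra code.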
As part of work under a multi-institutional programme grant on “digital twins” in engineering, Marco De Angelis and his collaborators are creating such a platform for sharing scientific and engineering models and analyses, one that facilitates better peer review. It includes a user-interface generator for scientific and engineering codes, which lets modellers share their models with others without having to design a user interface themselves.
GitHub: https://github.com/Digital-Twin-Operational-Platform
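The idea behind such a generator can be sketched in a few lines: introspect a model function's signature to derive an input form automatically, so the modeller never writes UI code. This is illustrative only, assuming a Python model with type annotations; damped_oscillator and form_spec are hypothetical names, not code from the repository linked above.

```python
# A sketch of the idea behind a user-interface generator: derive an input
# form from a model function's signature. (Illustrative only; the actual
# platform's mechanism may differ.)
import inspect

def damped_oscillator(mass: float = 1.0, stiffness: float = 100.0,
                      damping: float = 0.5) -> float:
    """Damped natural frequency of a single-degree-of-freedom system."""
    return (stiffness / mass) ** 0.5 * (1 - damping**2) ** 0.5

def form_spec(model):
    """Build a language-neutral form description from a model's signature."""
    fields = []
    for name, p in inspect.signature(model).parameters.items():
        fields.append({
            "name": name,                    # parameter name becomes a field
            "type": p.annotation.__name__,   # annotation drives the widget
            "default": p.default,            # pre-filled default value
        })
    return {"title": model.__name__, "doc": model.__doc__, "fields": fields}

print(form_spec(damped_oscillator))
```

A neutral form description like this could be rendered by any front end, letting users re-run the model with new inputs without ever touching the source code.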