Post date: Apr 9, 2017 2:23:16 AM
The main purpose of a model is to represent a complex real system as simpler mathematical equations and/or computer simulations that are useful for (a) understanding the current state of the system and establishing causality, and (b) predicting the evolution of important variables into the future. Models therefore make studying an environmental system easier: direct observation is costly, where it is possible at all, and prediction based purely on statistical techniques may not exploit advances in the basic sciences. Take weather and climate as an example. One method is to go out into the field, observe, photograph, film and draw conclusions from pure observation; remote sensing is part of this observational approach, and the resulting data can then be fed into further statistical and/or laboratory analyses. Even though observation is regarded as the best way to obtain accurate information about the environment, it is not without problems. Even if we set quantum physics aside, measurements are affected by uncertainties attributable to human and instrument errors. At the microscopic level, on the other hand, the accuracy of measurements is limited by quantum uncertainty, and the act of measurement itself can have a profound impact on the outcome of the observation.

Validation of a model (e.g., of regional and global climate models) is commonly regarded as the way to represent the accuracy of the model, but it should be treated with great caution, because both the models and the observations are far from reality: all models are wrong, but some are useful. Most studies on model quality control suggest, as a first step, comparing model results with observations (empirical accuracy) and with other models (robustness) to get an overview of model performance. The closer the models are to each other and to the observations, the better they are taken to represent reality.

Where models are used for adaptation, mitigation and resiliency analyses, however, empirical accuracy and robustness may not be enough. A model can show good empirical accuracy and robustness while resting on an assumption that violates scientific laws and principles. For example, a model that violates the laws of thermodynamics at the macroscopic level may still fit the observations, i.e., one can get empirically accurate results for the wrong reasons. The predictive capacity of a model built on the wrong reasons is like a house built on sand: when the winds blow and the rains and floods come, the house falls down. A model built on the right reasons, supported by basic scientific principles, is like a house built on rock: when the extremes hit, it stays intact. So, in addition to empirical validation and robustness, a foundation in basic scientific principles is something one must look for in a model.
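To make the two terms concrete, here is a minimal, purely illustrative Python sketch of how "empirical accuracy" and "robustness" could be turned into numbers; the model names and values are invented for the example and do not come from any real model output.

```python
# Toy sketch: empirical accuracy as error against observations, robustness as
# inter-model spread. All numbers below are made up for illustration only.
import numpy as np

obs = np.array([14.1, 14.3, 14.5, 14.8, 15.0])          # hypothetical observed annual means
models = {
    "model_A": np.array([14.0, 14.4, 14.6, 14.9, 15.2]),  # hypothetical model outputs
    "model_B": np.array([13.8, 14.1, 14.4, 14.6, 14.9]),
}

# Empirical accuracy: root-mean-square error of each model against observations.
for name, sim in models.items():
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    print(f"{name}: RMSE vs observations = {rmse:.2f}")

# Robustness: how closely the models agree with each other (inter-model spread).
ensemble = np.stack(list(models.values()))
spread = ensemble.std(axis=0).mean()
print(f"mean inter-model spread = {spread:.2f}")
```

Small RMSE and small spread are exactly the "close to observations and close to each other" criterion described above; the point of the rest of this post is that these two numbers alone are not sufficient.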
The scientific method of studying an environmental system is based on the conservation equations of energy, momentum and mass; that is, it rests purely on the physics behind the energy, momentum and mass transfer equations. Classical physics based on Newtonian mechanics is mostly sufficient, since the speeds at which the interactions occur are much less than the speed of light. There are, however, situations in which quantum and relativistic physics are needed, for instance in describing absorption, emission, scattering and other interactions between matter and light. The result is a set of coupled partial differential equations for which analytical solutions are almost impossible to find. Numerical and statistical techniques, and methods of simplifying the differential equations through simplified theory, are the only ways of extracting information from what would otherwise be (at least to the non-scientific community) a meaningless series of equations. The uncertainty of the models starts with these numerical and simplifying techniques. Each of them introduces errors that propagate at every step and eventually become large enough to give the models significant uncertainty. Depending on the numerical methods and the assumptions used to simplify the equations, the errors can grow exponentially and make the model's performance collapse after a few iterations. This is not a matter of carelessness: errors and uncertainties are an inherent part of mathematical computation. The problem is that there are "scientists" who deny this fact for different reasons, some out of limited competence and inexperience in modelling, a few out of ignorance, and others citing the pressure to secure permanent academic positions through a high number of publications. I want to leave the pressure aside for the time being and focus on the mathematical and computational uncertainties (the latter due to finite computer precision). Assume that we have a model with n steps N1, N2, N3, ..., Nn executed in logical order, i.e.,
N1 -> N2 -> N3 -> ... -> Nn
Assume that each step is correct with probability p and wrong with probability q = 1 - p. Then the probability of the whole model being correct is P = p^n, and of being wrong Q = 1 - p^n. Assume further that the model has 100 steps and that the probability of being accurate at each step is p = 0.993. That is roughly 99.3% accuracy per step, which is what usually lulls us into believing the model is working fine. But that is not the case: the joint probability of the model being correct is P = p^n = (0.993)^100 ≈ 0.5. At the end of the chain, the model is as likely to be wrong as it is to be right. No policy maker or business wants a model with that much uncertainty. So the question is how to minimise the uncertainties where possible, or at least make the errors and scientific assumptions (however wrong they are) transparent. Transparency is subjective and is problematic in the modern modelling culture because of competition among modellers under the pressure to "publish or perish". The scientific journals themselves claim to be under pressure because of their ratings, so they accept positive results only and reject negative ones. "Positive" and "negative" are meant here in the sense that the outcome of a study is in favour of, or against, the hypothesis. I recently saw a top-level "scientific" journal state that it accepts manuscripts only if they contain statistically significant results. What kind of science is that? Statistical significance testing is good, but it should not be a barrier to publication. A negative result simply means that the newly proposed method does not lead to better results, and publishing it tells other scientists not to repeat the attempt, which is how science moves forward. Let us leave the many problems of publication aside and focus on methods that are relevant for reducing the uncertainties in climate models.
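The arithmetic above is easy to reproduce. Here is a minimal Python sketch, under the (strong) assumption that the steps succeed or fail independently; it also shows how accurate each individual step would have to be for the whole chain to stay reliable.

```python
# Compounding of per-step accuracy over a chain of n steps, assuming
# independence between steps (an idealisation, as noted in the text).
p, n = 0.993, 100

P_correct = p ** n           # ~0.495: the chain is right only about half the time
Q_wrong = 1 - P_correct      # ~0.505
print(f"P(correct) = {P_correct:.3f}, P(wrong) = {Q_wrong:.3f}")

# How accurate must each step be for the whole 100-step chain to be 99% reliable?
p_needed = 0.99 ** (1 / n)   # ~0.9999 per step
print(f"per-step accuracy needed for 99% overall reliability: {p_needed:.5f}")
```

The second number makes the point even more sharply: to keep a 100-step model 99% reliable, each step has to be accurate to roughly four nines, far beyond the 99.3% that felt comfortable at first glance.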
To minimise the uncertainties in climate models, scientists have developed methods that incorporate physical processes into the model grid through parametrization schemes. These schemes keep evolving and being adjusted, becoming better every day. Some of the parametrizations of concern in climate models are radiation (both long-wave and short-wave), the planetary boundary layer (PBL), convection, cloud microphysics, soil, vegetation, urban and other processes. While the models are getting better thanks to these methods, the focus on improving the basic sciences is, in my view, very limited. For example, what steps are being taken to solve the Navier-Stokes equations analytically, and how many groups are working on the turbulence problem despite its mathematical difficulty? What alternative physical ideas do we have? While it is good to work on minimising the uncertainties in climate models through parametrization, downscaling and accounting for additional processes, it is equally necessary to focus on improving the basic sciences, leading to better solutions of the flow equations.
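For reference, the flow equations in question are the incompressible Navier-Stokes equations (momentum and continuity), written below in standard notation. No general analytical solution is known, and proving the existence and smoothness of their solutions in three dimensions remains an open problem (one of the Clay Millennium Prize problems), which is why atmospheric models fall back on numerics and parametrization in the first place.

```latex
% Incompressible Navier-Stokes equations in standard notation:
% \mathbf{u} velocity, p pressure, \rho density, \nu kinematic viscosity,
% \mathbf{f} body forces (e.g., gravity, Coriolis).
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},\\
\nabla\cdot\mathbf{u} &= 0.
\end{aligned}
```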