Computational physics is the study and implementation of numerical analysis to solve problems in physics.[1] Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline (or offshoot) of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics, an area of study which supplements both theory and experiment.[2]
There is a debate about the status of computation within the scientific method.[4] It is sometimes regarded as more akin to theoretical physics; others regard computer simulation as "computer experiments";[4] still others consider it an intermediate or distinct branch between theoretical and experimental physics, a third way that supplements theory and experiment. While computers can be used in experiments for the measurement, recording, and storage of data, this clearly does not constitute a computational approach.
Computational physics problems are in general very difficult to solve exactly. This is due to several (mathematical) reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. For example, even apparently simple problems, such as calculating the wavefunction of an electron orbiting an atom in a strong electric field (Stark effect), may require great effort to formulate a practical algorithm (if one can be found); other cruder or brute-force techniques, such as graphical methods or root finding, may be required. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity of many-body problems (and of their classical counterparts) tend to grow quickly. A macroscopic system typically has on the order of $10^{23}$ constituent particles, so simulating it particle by particle is far out of reach. Solving quantum mechanical problems is generally of exponential order in the size of the system,[5] and for the classical N-body problem the cost is of order $N^2$. Finally, many physical systems are inherently nonlinear at best, and at worst chaotic: this means it can be difficult to ensure that numerical errors do not grow to the point of rendering the 'solution' useless.[6]
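To make the last point concrete, the following minimal Python sketch (a toy illustration, not tied to any reference above) iterates the chaotic logistic map from two initial conditions that differ only at the level of floating-point roundoff; the separation grows exponentially until the two 'solutions' are completely different.

```python
# Sensitivity to initial conditions in the chaotic logistic map:
# two trajectories separated by ~1e-15 (roundoff scale) diverge
# to order-one differences within a few dozen iterations.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-15
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |x - y| = {abs(x - y):.3e}")
```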
Because computational physics covers a broad class of problems, it is generally divided according to the different mathematical problems it numerically solves, or the methods it applies. Among them, one can consider: root finding, systems of linear equations, ordinary differential equations, numerical integration, partial differential equations, and the matrix eigenvalue problem.
Furthermore, computational physics encompasses the tuning of the software/hardware structure to solve the problems, as the problems can be very large, whether in processing power or in memory requirements.
Due to the broad class of problems computational physics deals with, it is an essential component of modern research in many areas of physics, namely: accelerator physics, astrophysics, general relativity (through numerical relativity), fluid mechanics (computational fluid dynamics), lattice field theory/lattice gauge theory (especially lattice quantum chromodynamics), plasma physics (see plasma modeling), the simulation of physical systems (using e.g. molecular dynamics), nuclear engineering computer codes, protein structure prediction, weather prediction, solid state physics, soft condensed matter physics, hypervelocity impact physics, etc.
This challenge is not unique to ecology. In fact, most optimization problems cannot be solved directly, and one has to resort to heuristic search strategies [20, 21]. For microbial communities, a simple heuristic search (a greedy gradient-ascent) proceeds via a series of steps. At each step, a set of new communities is created by adding or removing one or a few species from the current microbial community. The community with the best performance is then chosen for the next step. Although easy to implement, the search can get stuck at a local optimum and achieve only a small fraction of the best possible performance. It is therefore essential to identify when such a heuristic search is likely to succeed.
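As a concrete illustration, here is a minimal Python sketch of the greedy gradient-ascent just described. The name `community_function` is a hypothetical black box mapping a species set to its measured performance, and the additive score in the usage example is purely illustrative; neither is taken from the paper's code.

```python
# Minimal sketch of the greedy heuristic search described above.
def greedy_search(species_pool, community_function, start=frozenset()):
    current = start
    best = community_function(current)
    while True:
        # Candidate moves: add or remove a single species.
        candidates = [current | {s} for s in species_pool - current]
        candidates += [current - {s} for s in current]
        scored = [(community_function(c), c) for c in candidates]
        top_score, top = max(scored, key=lambda t: t[0])
        if top_score <= best:          # local optimum reached
            return current, best
        current, best = top, top_score

# Toy usage with an arbitrary additive score per species:
import random
random.seed(0)
pool = frozenset(range(8))
effect = {s: random.uniform(-1, 1) for s in pool}
f = lambda comm: sum(effect[s] for s in comm)
print(greedy_search(pool, f))
```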
To incorporate the secretion and uptake of metabolites by microbes, we examined the microbial consumer-resource model with cross-feeding [33]. In the cross-feeding model, each species leaks a fraction, $l$, of the resources it consumes in the form of metabolic byproducts. The composition of these byproducts is specified by the metabolic leakage matrix $D$, with matrix element $D_{\alpha\beta}$ specifying the amount of resource $\alpha$ leaked when the species consumes resource $\beta$. This leakage is weighted by the ratio of resource qualities $w_\alpha/w_\beta$ so that energy-poor resources cannot produce a disproportionate amount of energy-rich resources. Each row of the leakage matrix sums to one. The leakage matrix and resource qualities were independent of species identity to respect potential universal stoichiometric constraints on species metabolism. The dynamical equations of the model with cross-feeding are given by Eq (4). Here, resource uptake rates were assumed to be of Monod form, i.e., saturating with resource abundance as $R_\alpha/(K + R_\alpha)$. We simulated the system of ODEs explicitly using the community simulator package to obtain steady-state abundances. Since direct simulation of the ODEs is computationally expensive, we simulated smaller candidate species pools with S = 12. Simulations were considered to have reached steady state when the root mean square of the logarithmic species growth rates, $\sqrt{\langle (d\ln N_i/dt)^2 \rangle}$, fell below a threshold. We imposed an abundance cutoff at periodic intervals during the numerical integration to hasten the extinction of species. We verified that the extinct species could not have survived in the community by simulating an attempted re-invasion of the steady-state community.
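A hedged sketch of this integration protocol is given below. For brevity it uses a generic Lotka-Volterra right-hand side as a stand-in for the cross-feeding consumer-resource equations of Eq (4), and the cutoff and tolerance values are illustrative rather than those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative protocol: integrate in segments, prune species below an
# abundance cutoff, and stop when the RMS of d(log N)/dt falls below a
# threshold. The Lotka-Volterra rhs is a stand-in for the cross-feeding
# consumer-resource model.
rng = np.random.default_rng(1)
S = 12
r = rng.uniform(0.5, 1.5, S)                 # intrinsic growth rates
A = -np.abs(rng.normal(0.5, 0.2, (S, S)))    # competitive interactions
np.fill_diagonal(A, -1.0)

def rhs(t, N):
    return N * (r + A @ N)

N = rng.uniform(0.1, 1.0, S)
cutoff, tol = 1e-6, 1e-6
for segment in range(200):
    sol = solve_ivp(rhs, (0.0, 50.0), N, method="LSODA",
                    rtol=1e-10, atol=1e-12)
    N = sol.y[:, -1]
    N[N < cutoff] = 0.0                      # hasten extinctions
    alive = N > 0
    dlogN = rhs(0.0, N)[alive] / N[alive]    # per-capita growth rates
    if np.sqrt(np.mean(dlogN**2)) < tol:     # steady-state criterion
        break
print("surviving species:", np.flatnonzero(alive))
```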
The fraction of variance unexplained by the model of order k, U(k), is given by
$$U(k) = 1 - \frac{\sum_i \left(\hat{F}^{(k)}_i - \bar{F}\right)^2}{\sum_i \left(F_i - \bar{F}\right)^2}, \qquad (15)$$
where $\hat{F}^{(k)}_i$ is the prediction of the order-k model for community $i$ and $\bar{F}$ is the mean function value; the numerator and denominator are the explained sum of squares and the total sum of squares, respectively. The number of higher-order terms grows exponentially with the number of species, and so inverting matrices for linear regression became computationally expensive for large landscapes. Therefore, we used an alternative method to calculate the fraction of variance unexplained for large landscapes, based on the Walsh decomposition (see S1 Text Sec. 3).
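For small landscapes, the direct regression route can be written in a few lines. The sketch below (toy data, hypothetical variable names) fits the function by least squares using presence/absence interaction terms up to order k and reports U(k) as the residual sum of squares over the total sum of squares, which for an ordinary least-squares fit with an intercept equals the expression above.

```python
import numpy as np
from itertools import combinations, product

# Toy computation of U(k): fit F over all 2^S presence/absence
# communities with interaction terms up to order k, then compare
# residual variance to total variance. Small S keeps this cheap.
rng = np.random.default_rng(0)
S = 6
X = np.array(list(product([0, 1], repeat=S)), dtype=float)
F = X @ rng.normal(size=S) + 0.5 * rng.normal(size=len(X))  # toy landscape

def unexplained_variance(X, F, k):
    cols = [np.ones(len(X))]                 # order-0 term (intercept)
    for order in range(1, k + 1):
        for idx in combinations(range(X.shape[1]), order):
            cols.append(np.prod(X[:, list(idx)], axis=1))  # interaction
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, F, rcond=None)
    residual = F - design @ coef
    return np.sum(residual**2) / np.sum((F - F.mean())**2)

for k in range(1, 4):
    print(f"U({k}) = {unexplained_variance(X, F, k):.4f}")
```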
In Lagaris et al [89], the solution of a differential equation is expressed as the sum of a constant term and an adjustable term with unknown parameters; the best parameter values are determined via a neural network. However, their method only works for problems with regular boundaries. Lagaris et al [90] extend the method to problems with irregular boundaries.
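Here is a minimal sketch of that trial-solution construction for the toy ODE y' = -y with y(0) = 1 (exact solution e^{-x}). The use of PyTorch and the particular network architecture are assumptions made for illustration; any automatic-differentiation framework works.

```python
import math
import torch

# Trial solution in the spirit of Lagaris et al [89]:
# psi(x) = 1 + x * net(x) satisfies y(0) = 1 by construction,
# so only the differential-equation residual is minimized.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)

for step in range(3000):
    psi = 1.0 + x * net(x)                       # hard-wired y(0) = 1
    dpsi, = torch.autograd.grad(psi, x, torch.ones_like(psi),
                                create_graph=True)
    loss = ((dpsi + psi) ** 2).mean()            # residual of y' = -y
    opt.zero_grad(); loss.backward(); opt.step()

x_test = torch.tensor([[2.0]])
with torch.no_grad():
    print(float(1.0 + x_test * net(x_test)), math.exp(-2.0))  # compare
```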
Finally, more recent advancements by Kondor and Trivedi [83] and Mallat [102] led to the solution of Raissi et al [146], which extended previous notions while also introducing fundamentally new approaches, such as a discrete time-stepping scheme that efficiently leverages the predictive ability of neural networks [82]. The fact that the framework could be applied directly to essentially any differential problem flattened the learning curve, and it was the first step for many researchers who wanted to solve their problems with a neural network approach [105]. The success of PINNs can be seen from the rate at which Raissi et al [146] is cited, and from the exponentially growing number of citations in recent years (Fig. 1).
Being able to learn PDEs, PINNs have several advantages over conventional methods. In particular, PINNs are mesh-free methods that enable on-demand solution computation after a training stage, and they allow solutions to be made differentiable using analytical gradients. Finally, they provide an easy way to solve forward and inverse problems jointly, using the same optimization problem. In addition to solving differential equations (the forward problem), PINNs may be used to solve inverse problems such as characterizing fluid flows from sensor data. In fact, the same code used to solve forward problems can be used to solve inverse problems with minimal modification. Moreover, in the context of inverse design, PDEs can also be enforced as hard constraints (hPINN) [101]. Indeed, PINNs can address PDEs in domains with very complicated geometries, or in very high dimensions, that are difficult to simulate numerically, as well as inverse problems and constrained optimization problems.
Physics-informed neural networks can address problems that are described by scarce data or noisy experimental observations. Because they can use known data while adhering to any given physical law specified by general nonlinear partial differential equations, PINNs can also be considered neural networks that deal with supervised learning problems [52]. PINNs can solve differential equations expressed, in the most general form, as
$$\mathcal{F}(u(z); \gamma) = f(z), \quad z \in \Omega,$$
subject to boundary or initial conditions $\mathcal{B}(u(z)) = g(z)$ for $z \in \partial\Omega$, where $u$ denotes the unknown solution, $\gamma$ the parameters of the physical model, $f$ a source term, and $\Omega$ the domain.
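The following is a minimal PINN sketch under the same assumptions as the trial-solution example above (PyTorch, toy problem, illustrative hyperparameters). It solves u''(x) = -π² sin(πx) on [0, 1] with u(0) = u(1) = 0 by penalizing the PDE residual and the boundary values in a single loss; unlike the trial-solution approach, the boundary conditions here are soft constraints. As noted above, promoting a physical coefficient to a trainable parameter and adding a data-misfit term would turn the same code into an inverse-problem solver.

```python
import math
import torch

# Minimal PINN with soft constraints for u'' = -pi^2 sin(pi x),
# u(0) = u(1) = 0; the exact solution is sin(pi x).
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(128, 1).requires_grad_(True)          # collocation points
xb = torch.tensor([[0.0], [1.0]])                    # boundary points

for step in range(5000):
    u = net(x)
    du, = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)
    d2u, = torch.autograd.grad(du, x, torch.ones_like(du),
                               create_graph=True)
    residual = d2u + math.pi**2 * torch.sin(math.pi * x)
    loss = (residual**2).mean() + (net(xb)**2).mean()  # PDE + boundary
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    print(float(net(torch.tensor([[0.5]]))), math.sin(math.pi * 0.5))
```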