Physical phenomena are often modeled by PDEs, and to uncover new physical laws it is important to understand how PDE coefficients can be inferred from experimental data. For evolution-type PDEs, if the initial condition is sufficiently rich in Fourier modes, then almost surely the PDE is uniquely determined by the solution data. For short-time dynamics it is straightforward to bound the number of modes needed, and that bound is likely optimal. But for long-time behavior, is a smaller bound possible?
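As a toy illustration of the short-time mechanism (a minimal sketch, assuming a constant-coefficient advection-diffusion equation on the torus and noiseless data; all names and values below are illustrative): every Fourier mode excited by the initial condition evolves independently and contributes one linear equation for the coefficients, so two distinct modes already determine (a, b), and extra modes only add least-squares redundancy.

```python
import numpy as np

# Sketch: recover (a, b) in u_t = a u_xx + b u_x on the torus from two
# snapshots of one trajectory. Mode n evolves as exp(lambda_n t) with
# lambda_n = -a n^2 + i b n, so every excited mode gives one equation.
a_true, b_true = 0.7, 1.3
modes = np.array([1.0, 2.0, 3.0, 5.0])        # modes present in u_0
u0_hat = np.ones_like(modes, dtype=complex)   # Fourier coefficients of u_0
t1 = 0.1                                      # short snapshot time

lam = -a_true * modes**2 + 1j * b_true * modes
u1_hat = u0_hat * np.exp(lam * t1)            # stand-in for measured data

# lambda_n from the two snapshots; t1 must be small enough that
# |b n t1| < pi, otherwise the complex log wraps around its branch cut.
lam_est = np.log(u1_hat / u0_hat) / t1

# Re(lambda_n) = -a n^2 and Im(lambda_n) = b n: two least-squares fits.
a_est = np.linalg.lstsq(-modes[:, None]**2, lam_est.real, rcond=None)[0][0]
b_est = np.linalg.lstsq(modes[:, None], lam_est.imag, rcond=None)[0][0]
print(a_est, b_est)  # approximately (0.7, 1.3)
```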
Clearly, the PDE learning problem becomes ill-posed for high-order or nonlinear equations. Instability is another issue that needs to be understood. [Partly solved for short-time instability.]
For long-time instability, assuming an asymptotic expansion in time, is it possible to show a uniform bound for the dissipative system? [Seems related to 5.]
For short-time dynamics, if the initial condition lies in the zero eigenspace (a steady solution), then all dynamics are lost and no reconstruction is possible. If the initial condition is annihilated by a power of the differential operator, it seems the instability also deteriorates, though mildly. How can this be quantified correctly?
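A minimal instance of the degeneracy, assuming a self-adjoint operator $L$ with eigenpairs $(\lambda_n, \phi_n)$ and the evolution $u_t = -Lu$: if $u_0 = \phi_n$, then

\[ u(t) = e^{-\lambda_n t}\,\phi_n , \]

so the trajectory stays in a one-dimensional subspace and only the single scalar $\lambda_n$ is observable; if $u_0 \in \ker L$, the trajectory is constant and nothing is observable. Quantifying the intermediate regime, where $u_0$ is concentrated near such degenerate subspaces, is the question above.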
It has been shown that a solution trajectory can span the full solution space if the initial condition is rich enough, where the order needs to be larger than the dimension. For the converse direction, it is not clear how to give a good statement of convergence other than absolute convergence. Of course, convergence can fail in general; see the Hausdorff moment problem in Widder's book on the Laplace transform.
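A sketch of the spanning mechanism in finite dimensions, assuming a diagonalizable dissipative operator with distinct real eigenvalues $\lambda_1, \dots, \lambda_N$ and an initial condition $u_0 = \sum_n c_n \phi_n$ with every $c_n \neq 0$:

\[ u(t_j) = \sum_{n=1}^{N} c_n\, e^{-\lambda_n t_j}\, \phi_n , \qquad j = 1, \dots, N , \]

where the matrix $\big(e^{-\lambda_n t_j}\big)_{j,n}$ is nonsingular for distinct sample times $t_j$ (real exponentials form a Chebyshev system), so the $N$ snapshots span the same subspace as $\phi_1, \dots, \phi_N$. The convergence question concerns the infinite-dimensional analogue of this statement.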
In the diffusion limit, time-dependent transport in the slab becomes a 1D second-order parabolic equation, whose solution subspace is spanned by a trajectory. It remains to be seen whether this still holds for the transport equation itself: at least away from the boundary, does the angular-average solution still sit near the sampled subspace?
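For reference, the scaling in mind is the standard parabolic one for isotropic scattering in the slab (a textbook limit, stated under the simplest assumption of a constant scattering coefficient $\sigma$):

\[ \varepsilon^2\, \partial_t f + \varepsilon\, \mu\, \partial_x f = \sigma \Big( \tfrac{1}{2} \int_{-1}^{1} f \, d\mu' - f \Big) \ \longrightarrow\ \partial_t \rho = \partial_x \Big( \tfrac{1}{3\sigma}\, \partial_x \rho \Big) , \qquad \rho = \tfrac{1}{2} \int_{-1}^{1} f \, d\mu , \]

as $\varepsilon \to 0$, so the question is whether the snapshot-subspace property of the limiting parabolic equation survives at small but positive $\varepsilon$, away from boundary layers.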
It is also interesting to see whether there exists a point source satisfying the decay condition on the coefficients when decomposed into eigenfunctions of a self-adjoint elliptic operator. [It can be proved for second-order divergence-form operators on a compact manifold without boundary; the idea depends on a doubling-index estimate on wild sets instead of regular cubes.]
If the operator is not self-adjoint, a similar approach still works if the eigenfunctions form a complete basis.
The hyperbolic case seems simple given the dimension-mismatch statement, but it has a connection with the trigonometric moment problem.
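One way to make the connection explicit, assuming a self-adjoint $L$ with positive eigenpairs $(\lambda_n, \phi_n)$ and the wave equation $\partial_t^2 u + Lu = 0$:

\[ u(t) = \sum_n \Big( c_n \cos\!\big(\sqrt{\lambda_n}\, t\big) + d_n\, \frac{\sin\!\big(\sqrt{\lambda_n}\, t\big)}{\sqrt{\lambda_n}} \Big)\, \phi_n , \]

so pairing the solution with a fixed test function and sampling at equispaced times $t_j = j\tau$ produces exactly the data of a trigonometric moment problem in the frequencies $\sqrt{\lambda_n}$.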
The sensitivity of the subspace with respect to operator perturbations. The perturbations come from two sources. One is on the operator itself, subject to a regularity requirement; ideally, smooth perturbations should not change the subspace dramatically. The other is random noise, which can have a considerable impact on the subspace.
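A toy experiment for this comparison (sizes and perturbations are illustrative only), in the spirit of Davis-Kahan bounds: perturb a discretized 1D Laplacian by a smooth multiplicative coefficient and by a random symmetric matrix of matched norm, and compare the principal angles between the dominant eigen-subspaces.

```python
import numpy as np
from scipy.linalg import eigh, subspace_angles

# Toy sensitivity check: how far do the k lowest modes of a discrete
# 1D Laplacian move under (i) a smooth coefficient perturbation and
# (ii) a random symmetric perturbation of the same Frobenius norm?
rng = np.random.default_rng(0)
n, k, eps = 200, 5, 1e-2
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # Dirichlet stencil

def low_modes(A, k):
    """Orthonormal basis of the k lowest eigenvectors."""
    _, V = eigh(A)
    return V[:, :k]

V0 = low_modes(L, k)

x = np.linspace(0.0, 1.0, n)
smooth = np.diag(np.sin(2 * np.pi * x))   # smooth multiplicative coefficient
G = rng.standard_normal((n, n))
noise = (G + G.T) / 2                     # random symmetric perturbation
noise *= np.linalg.norm(smooth) / np.linalg.norm(noise)

for name, P in [("smooth", smooth), ("random", noise)]:
    angles = subspace_angles(V0, low_modes(L + eps * P, k))
    print(name, float(np.max(angles)))    # largest principal angle (radians)
```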
Besides the parabolic case, the elliptic case in divergence form can also be studied. Instead of dealing with the tensor coefficient directly, it is natural to study the scalar case first. The resulting problem becomes whether (w.h.p.) a solution exists that allows a unique coefficient.
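For the scalar case the reduction is classical: if $-\nabla \cdot (a \nabla u) = f$ and the pair $(u, f)$ is observed, then expanding the divergence gives the first-order transport equation

\[ \nabla a \cdot \nabla u + a\, \Delta u = -f , \]

so $a$ propagates along the integral curves of $\nabla u$, and uniqueness can only fail near critical points of $u$. The w.h.p. question then becomes whether a (random) right-hand side produces a solution with $|\nabla u|$ bounded away from zero on the region of interest.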
It seems the nonlinear parabolic case is no more difficult if one uses the homotopy idea.
As a follow-up to 8, the same result applies to the integral operator case.
PDE learning for integral operators is easier, but it also brings new issues. One is similar to a subproblem from compressive sensing: decoupling two sensing matrices (operators). Another is the hybrid case, involving both differential and integral operators; there, the differential part seems to dominate the nature of the problem.
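One standard way to formalize the decoupling subproblem, borrowed from blind deconvolution / self-calibration in compressive sensing (stated here as an analogy rather than a literal reduction): with unknown real vectors $h$, $m$ and known probes $a_j$, $b_j$, the bilinear measurements

\[ y_j = (a_j^{\top} h)(b_j^{\top} m) = \big\langle a_j b_j^{\top},\ h m^{\top} \big\rangle_F \]

are linear in the rank-one matrix $h m^{\top}$, so decoupling the two operators lifts to a low-rank recovery problem. The hybrid differential-integral case would need an analogous lifting, with the differential part presumably dictating the stability.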
Learning a concise form of the PDE from data. Compared with the original approach of dealing with coefficients, it is even more aggressive to extract as many relations between the coefficients as possible. [The preference can be encoded through a graph of derivative dependences that minimizes the graph depth or favors forms written in nabla symbols.]
Neural-network solutions to PDEs have been explored in several ways, notably in high dimensions. Possibly the NN solution finds a better way to redistribute the parameters, representing the solution with (roughly) an equal number of parameters. However, the computational burden is shifted to a massive search. Is there a way to quantify the search difficulty and the max-min accuracy? [This seems related to the so-called Rashomon set of a loss function. The accessibility of the set within the parameter space (denoted by S) can be characterized as the "distance" from a point of S to the set. Here the choice of "distance" (metric) could be crucial. It could be simpler if S is topologically equivalent to Euclidean space.]
Inspired by the fact that the diffusion limit of transport is a velocity-independent equation, it is natural to ask a similar question for RTE learning: given velocity-averaged data, is it possible to learn the governing equation of the dynamics? [Unique determination is proved, but stability is currently unknown.]
The low-rank property of elliptic PDE kernels is known mainly through Caccioppoli-type energy estimates; it is also known that the kernel can be represented as a boundary integral with respect to itself. Is it possible to derive a low-rank representation from that boundary integral? If not, what is a counterexample?
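To fix notation, the self-representation in question follows from the second Green identity, assuming $L = -\nabla \cdot (A \nabla\, \cdot)$ with Green's function $G$, a subdomain $D$ with $y \in D$ and $x \notin \bar D$, and outward conormal derivative $\partial_{\nu_A}$:

\[ G(x,y) = \int_{\partial D} \Big( \partial_{\nu_A} G(x,z)\, G(z,y) - G(x,z)\, \partial_{\nu_A} G(z,y) \Big)\, d\sigma(z) , \]

which is already separated in $x$ and $y$ under the integral sign; a quadrature rule with $m$ nodes on $\partial D$ then yields a rank-$O(m)$ approximation, and the question is whether the node count can be made comparable to the ranks predicted by the Caccioppoli-based estimates.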