In implicit integration methods, the integration accuracy in theory depends on the ratio of tube thickness to grid spacing; when this ratio is held constant, the integration should produce a relatively large error. However, this is not observed for smooth surfaces (e.g. spheres). A conjectured explanation is a certain ergodic behavior of the offsets; for planes, this ergodicity seems related to irrational rotation. [solved.]
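One way the irrational-rotation heuristic could be made precise for planes is Weyl's equidistribution theorem (the connection to the tube quadrature is my reading of the note, not established in it): for irrational $\alpha$ and any Riemann-integrable $g$,
\[
\frac{1}{N}\sum_{n=0}^{N-1} g(\{n\alpha\}) \;\longrightarrow\; \int_0^1 g(t)\,dt \qquad (N \to \infty),
\]
so the grid offsets of a plane with irrational slope sample the tube cross-section uniformly, and the per-slice quadrature errors average out instead of accumulating.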
If one regards a weakly singular volumetric integral as the solution to a "differential" equation of finite or infinite order, then the boundary effect is introduced by the "boundary conditions"; this effect can be removed by splitting the solution into a boundary integral and a mild volumetric integral. The boundary integral can be handled easily if an analytic representation is available, but providing one is typically very challenging.
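A concrete (assumed) instance of this viewpoint is the Newton potential for $-\Delta u = f$ on a domain $\Omega$: Green's third identity splits $u$ into exactly such a pair of integrals,
\[
u(x) = \int_\Omega G(x,y)\, f(y)\, dy
\;+\; \int_{\partial\Omega} \Big( G(x,y)\, \partial_n u(y) - u(y)\, \partial_{n_y} G(x,y) \Big)\, dS(y),
\]
where $G$ is the free-space Green's function (e.g. $G(x,y) = \frac{1}{4\pi|x-y|}$ in 3D). The volume term is the mild volumetric integral; the boundary term carries the boundary effect and requires knowledge of $u$ and $\partial_n u$ on $\partial\Omega$.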
A follow-up to 1: when a random rigid transformation is present, the error estimate appears to still depend on the regularity, yet numerical experiments do not reveal such a dependence. [solved.]
Gradient descent is known to have an O(1/T) error with a fixed step size, and recent works show that dynamic step sizes can make it converge faster. Can this be pushed further, to an optimal or nearly optimal bound? For quadratic objective functions, the optimal step sizes are indeed dynamic: they come from the roots of a Chebyshev polynomial, which in turn come from an easy min-max argument.
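A minimal numerical sketch of the quadratic case (all names and parameters are illustrative, not from the note): map the roots of the degree-T Chebyshev polynomial onto the spectrum interval [mu, L] and use their reciprocals as step sizes, so the product of the factors (I - h_k A) becomes a scaled Chebyshev polynomial in A.

```python
import numpy as np

# Quadratic objective f(x) = 0.5 x^T A x with spectrum in [mu, L]; minimizer is 0.
rng = np.random.default_rng(0)
mu, L, d, T = 0.1, 1.0, 20, 31
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(mu, L, d)) @ Q.T

# Chebyshev step sizes: reciprocals of the degree-T Chebyshev roots
# mapped from [-1, 1] onto [mu, L].
k = np.arange(T)
gamma = (L + mu) / 2 + (L - mu) / 2 * np.cos((2 * k + 1) * np.pi / (2 * T))
cheb_steps = 1.0 / gamma

def run_gd(steps, x0):
    x = x0.copy()
    for h in steps:              # x <- x - h * grad f(x), grad f(x) = A x
        x = x - h * (A @ x)
    return np.linalg.norm(x)     # distance to the minimizer 0

x0 = rng.standard_normal(d)
print("fixed step 1/L :", run_gd(np.full(T, 1.0 / L), x0))
print("Chebyshev steps:", run_gd(cheb_steps, x0))
# Caveat: this naive ordering of the steps is numerically unstable for
# large T; permuted step orderings are the standard remedy.
```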
The B-spline 2^k conjecture for the condition number: currently the best estimate carries an extra factor linear in k. It seems this factor could possibly be reduced to k^{0.5}.
If all best polynomial approximations to a continuous function f on [-1, 1] vanish at the origin, must f be odd? This is a conjecture of George Lorentz.
The Müntz theorem characterizes when a family of monomials spans a dense subspace of L^p. If we perturb the monomials, does the same conclusion still hold?
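For reference, one standard form of the theorem (the precise hypotheses below are my paraphrase): for exponents $0 < \lambda_1 < \lambda_2 < \cdots \to \infty$ and $1 \le p < \infty$,
\[
\overline{\operatorname{span}}\,\{x^{\lambda_n}\} = L^p[0,1]
\iff
\sum_{n} \frac{1}{\lambda_n} = \infty,
\]
so the question becomes whether small perturbations of the exponents $\lambda_n$ (or of the monomials themselves) preserve density.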
The convergence of the iterated Aitken method is not as well understood as that of other methods based on Padé approximation. The iterated Aitken method can sometimes achieve a far better convergence order than the alternatives, yet a comprehensive understanding is absent.
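A minimal sketch of the method in question (the demo series is my choice, not from the note): one Aitken pass maps $s_n \mapsto s_n - (\Delta s_n)^2 / \Delta^2 s_n$, and the iterated method reapplies this to the accelerated sequence until it is exhausted.

```python
import math

def aitken_once(s):
    """One pass of Aitken's Delta^2: t_n = s_n - (Ds_n)^2 / (D^2 s_n)."""
    return [s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

def iterated_aitken(s):
    """Apply Aitken repeatedly; each pass shortens the sequence by 2."""
    while len(s) >= 3:
        s = aitken_once(s)
    return s[-1]

# Demo: partial sums of the slowly convergent series sum (-1)^n/(2n+1) = pi/4.
partial, total = [], 0.0
for n in range(12):
    total += (-1) ** n / (2 * n + 1)
    partial.append(total)
print(iterated_aitken(partial), math.pi / 4)
```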
Is it possible to combine the bracket method (or its variants) with the epsilon method? The bracket method seems, more or less, to be creating a rough path.
If we want to fit the monomial x^m (in the L_inf norm) with k monomials x^n, n < m, then the best choices are n = m-1, ..., m-k. That is to say, one should fit with the most similar functions. Does this idea generalize to other situations, for instance to a "monotone" family of functions f_k(x)?
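A brute-force way to test the claim numerically (the interval [0, 1], the grid size, and all names here are assumptions, since the note fixes none of them): for each k-subset of exponents below m, solve the discretized L_inf fit as a linear program and compare the optimal errors.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

m, k = 8, 3
xs = np.linspace(0.0, 1.0, 400)
target = xs ** m

def linf_error(exps):
    """Best L_inf fit of x^m by span{x^n : n in exps} on the grid, via the LP:
    minimize t  subject to  -t <= (A c - b)(x_i) <= t."""
    A = np.stack([xs ** n for n in exps], axis=1)
    nvar = len(exps)
    obj = np.zeros(nvar + 1)      # variables: (c_1, ..., c_k, t)
    obj[-1] = 1.0                 # objective: minimize t
    A_ub = np.block([[A, -np.ones((len(xs), 1))],
                     [-A, -np.ones((len(xs), 1))]])
    b_ub = np.concatenate([target, -target])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * nvar + [(0, None)])
    return res.fun

best = min(itertools.combinations(range(m), k), key=linf_error)
print("best exponent set:", best)  # expect (m-k, ..., m-1) if the claim holds
```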