Consider the a_n as given in Fake function theory (part 1) and in Fake function theory part 2 (without the linear rescaling).
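As a reminder, a minimal sketch of the part-1 coefficients this post starts from, assuming a_n = min over x > 0 of g(x)/x^n (the names fake_coeff and xs, and the grid search, are mine, not from the original posts):

```python
import math

def fake_coeff(g, n, xs):
    """Part-1 style fake Taylor coefficient: a_n = min over x > 0 of g(x) / x^n."""
    return min(g(x) / x**n for x in xs)

# Demo with g(x) = exp(x), whose actual n-th Taylor coefficient is 1/n!.
xs = [0.01 * k for k in range(1, 5001)]   # crude grid on (0, 50]
n = 10
a_n = fake_coeff(math.exp, n, xs)
exact = 1.0 / math.factorial(n)
print(a_n / exact)   # a_n overshoots the true coefficient, roughly by sqrt(2 pi n)
```

The overshoot factor of about sqrt(2 pi n) is exactly what the improvement s_n below is meant to remove.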
Assume the extra condition d^3/dx^3 g(x) > 0 for x >= 0.
Then the following is usually an improvement :
s_n = a_n / sqrt[ 2 pi d^2/dx^2 g(q_n) ]
You can try this with, for instance, exp(x): you will see that this method reproduces Stirling's formula for n!, so it is a very good approximation!
Also exp(ln(x)^2) or exp(exp(x)) are improved by this.
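A quick numerical check of the claim for g(x) = exp(x), under the assumption that q_n here is the critical point in the log variable (so exp(q_n) = n, i.e. q_n = ln(n), and then d^2/dx^2 g(q_n) = n):

```python
import math

n = 10
q_n = math.log(n)                        # assumed critical point: exp(q_n) = n
a_n = math.exp(n) / n**n                 # part-1 coefficient: min of exp(x)/x^n
s_n = a_n / math.sqrt(2 * math.pi * n)   # improved coefficient; g''(q_n) = exp(q_n) = n
exact = 1.0 / math.factorial(n)
print(a_n / exact, s_n / exact)          # s_n is very close to 1/n! (Stirling)
```

s_n here is exactly Stirling's approximation of 1/n!, while the uncorrected a_n is off by about sqrt(2 pi n).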
Ok, you might think, but how and why ? Where does it come from ?
It relates to contour integrals.
A thought experiment ...
Let g(x) = exp( H( ln(x) ) ).
The a_n coefficients can be thought of as calculating a Cauchy integral around the circle centered around the origin for the x^n term. The radius is chosen to be the "best fit" for the x^n term.
Let's look at what a contour integral formulation looks like for a Taylor series and derive the result from there :
We assume for the sake of the argument that g(x) is already entire, or that the a_n are the actual Taylor coefficients ... Let's just say g(x) = f(x) for now.
And thus also f(x) = exp( H( ln(x) ) ).
a_n = 1/(2 pi i) ContourIntegral f(x) / x^(n+1) dx
Probably the only practical way to calculate this integral for a general f(x) is by using a circle centered on the origin :
a_n = 1/(2 pi i) integral[ - pi i , pi i ] exp( - n x ) f( exp(x) ) dx
This is the Cauchy integral formula with radius 1, but we can use any radius r :
a_n = 1/(2 pi i r^n ) integral[ - pi i , pi i ] exp( - n x ) f( r exp(x) ) dx
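This radius-r formula is easy to sanity-check numerically. A sketch for f(x) = exp(x), parametrizing the circle as x = i*theta and using a plain Riemann sum (which converges very fast for periodic analytic integrands); the helper name coeff is mine:

```python
import cmath
import math

def coeff(f, n, r, m=2000):
    """n-th Taylor coefficient via 1/(2 pi r^n) * integral over theta in [-pi, pi]
    of exp(-i n theta) * f(r exp(i theta)) d theta, as a Riemann sum."""
    total = 0.0 + 0.0j
    for k in range(m):
        theta = -math.pi + 2 * math.pi * k / m
        total += cmath.exp(-1j * n * theta) * f(r * cmath.exp(1j * theta))
    # the 2*pi from the measure cancels against the prefactor, leaving 1/m
    return (total / m / r**n).real

n = 5
exact = 1.0 / math.factorial(n)
c1 = coeff(cmath.exp, n, 1.0)
c2 = coeff(cmath.exp, n, 2.0)
print(c1, c2, exact)   # the same coefficient comes out for any radius
```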
so
a_n = 1/(2 pi i r^n ) integral[ - pi i , pi i ] exp[ - n x + H( ln(r) + x ) ] dx
Now set ln(r) = q_n, because q_n is the extremal point.
Thus
a_n = 1/[ 2 pi i exp( n q_n ) ] integral[ - pi i , pi i ] exp[ - n x + H( q_n + x ) ] dx
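A numerical sanity check of this identity for f(x) = exp(x), so that H(z) = exp(z) and q_n = ln(n) (an assumption of this example). Parametrize the contour as x = i*theta; the i from dx = i d(theta) is absorbed into the prefactor:

```python
import cmath
import math

n = 6
q_n = math.log(n)   # for H = exp, the critical point H'(q_n) = n gives q_n = ln(n)
m = 4000
total = 0.0 + 0.0j
for k in range(m):
    theta = -math.pi + 2 * math.pi * k / m
    # x = i*theta on the contour; H(q_n + x) = exp(q_n + i*theta) = n*exp(i*theta)
    total += cmath.exp(-1j * n * theta + n * cmath.exp(1j * theta))
# (total / m) approximates 1/(2 pi) * integral d(theta); then divide by exp(n q_n)
a_n = (total / m).real / math.exp(n * q_n)
print(a_n, 1.0 / math.factorial(n))
```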
notice that - n q_n + H( q_n ) = t_n(q_n), so pulling the constant factor exp( H(q_n) ) out of the integrand combines with the prefactor 1/exp( n q_n ) to give exp( t_n(q_n) ) :
a_n = exp(t_n(q_n)) / (2 pi i) integral [ - pi i , pi i ] exp[ - n x + H( q_n + x ) - H( q_n ) ] dx
and if H( q_n + x ) = h_0 + h_1 x + h_2 x^2 + ... then h_0 = H( q_n ) and
a_n = exp(t_n(q_n)) / (2 pi i) integral [ - pi i , pi i ] exp( (h_1 - n) x + h_2 x^2 + h_3 x^3 + ... ) dx
notice h_1 = H'(q_n) = n by the definition of q_n ( it is the critical point of t_n ), so the linear term vanishes :
a_n = exp(t_n(q_n)) / (2 pi i) integral [ - pi i , pi i ] exp( h_2 x^2 + h_3 x^3 + ... ) dx
Notice that by the symmetry of the integral the odd powers x^(2k+1) vanish.
So
a_n = exp(t_n(q_n)) / (2 pi i) integral [ - pi i , pi i ] exp( h_2 x^2 + h_4 x^4 + h_6 x^6 + ... ) dx
so
(new) a_n = (old) a_n * 1/(2 pi i) integral [ - pi i , pi i ] exp( h_2 x^2 + h_4 x^4 + h_6 x^6 + ... ) dx, where (old) a_n = exp(t_n(q_n)).
This leads to the idea to truncate
(new) a_n = (old) a_n * 1/(2 pi i) integral [ - pi i , pi i ] exp( h_2 x^2 ) dx
because h_2 > 0 but the other h_n might not be > 0 !!
( the idea to use a fake function for that Taylor series to resolve this occurred to us ! )
A further approximation (worse but simpler) gives us
(new) a_n = (old) a_n * 1/(2 pi i) integral [ - oo i , oo i ] exp( h_2 x^2 ) dx
where we recognize the Gaussian integral : substituting x = i y gives 1/(2 pi i) integral [ - oo i , oo i ] exp( h_2 x^2 ) dx = 1/(2 pi) integral [ - oo , oo ] exp( - h_2 y^2 ) dy = 1/sqrt( 4 pi h_2 ) = 1/sqrt( 2 pi H''(q_n) ), since h_2 = H''(q_n)/2.
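To see how little the extension from [ - pi i , pi i ] to [ - oo i , oo i ] costs, write x = i*theta so the integrand becomes exp( - h_2 theta^2 ), and compare the finite and infinite ranges. A sketch; h_2 = 5 is just an arbitrary positive test value:

```python
import math

h2 = 5.0   # hypothetical positive h_2
# finite range [-pi, pi]: integral of exp(-h2 * theta^2) = sqrt(pi/h2) * erf(sqrt(h2) * pi)
finite = math.sqrt(math.pi / h2) * math.erf(math.sqrt(h2) * math.pi)
full = math.sqrt(math.pi / h2)   # Gaussian integral over the whole real line
print(finite, full)              # the tail beyond [-pi, pi] is utterly negligible
```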
Finally, for say the semi-exp(x) we have H(z) = g(z), because g commutes by composition with exp(x) : g(exp(z)) = exp(g(z)) implies H(z) = ln g(exp(z)) = g(z). Hence H''(q_n) = d^2/dx^2 g(q_n).
new a_n = s_n = a_n / sqrt[ 2 pi d^2/dx^2 g(q_n) ]
is thus an improvement for the semi-exp(x).
And for others, but not for all.
Consider fake( h_2 y + h_4 y^2 + h_6 y^3 + ... ) = j_2 y + j_4 y^2 + j_6 y^3 + ... ( writing y = x^2 )
and then take
J(x) = j_2 x^2 + j_4 x^4 + j_6 x^6 + ...
T_n = exp(t_n(q_n)) / (2 pi i) integral [ - pi i , pi i ] exp( j_2 x^2 + j_4 x^4 + j_6 x^6 + ... ) dx
This is the Tommy-Sheldon formula.
It can be iterated and used for computing J, etc.
Further methods can be complicated.
Also keep in mind that in this context the contour integrals are well defined for entire functions, where they yield the Taylor coefficients.
But if g(x) is not entire it is a bit of a manoeuvre.
If g(x) is, however, analytic on the imaginary axis and on the half-plane Re(z) > 0, then that is good for the approximation, because the integral makes more sense and is taken over a domain where g is, more or less, analytic.
Finding solutions H, or approximations to them, for g(x) = exp( H( ln(x) ) ) is of course a subtopic of its own. And it might again require approximations, because there might not be a closed form.
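Likewise, q_n itself (the critical point where H'(q_n) = n) usually has to be found numerically. A bisection sketch, assuming H' is increasing on the search interval, checked here on H(z) = exp(z) where q_n = ln(n); the function name critical_point is mine:

```python
import math

def critical_point(H_prime, n, lo=-50.0, hi=50.0, iters=200):
    """Solve H'(q) = n by bisection, assuming H' is increasing on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if H_prime(mid) < n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# H(z) = exp(z)  ->  H'(z) = exp(z)  ->  q_n = ln(n)
q = critical_point(math.exp, 10)
print(q, math.log(10))
```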
This is sometimes the hidden reason why some approximation formulas work better or worse when it is not immediately clear why.
More to come...
Thank you for your attention.