The main goal is to find good approximations with entire functions for a given analytic function , and then to apply this to other problems.
The main idea or tool is finding appropriate Taylor series that converge everywhere and whose coefficients are all positive reals.
So for a given g(z) that maps positive reals to positive reals , we want to find a Taylor series f(z) that approximates g(z) for positive real z.
Or in other words : f(x) is asymptotic to g(x) as x -> oo.
f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
with a_n > 0.
For this to be possible we require g(x) > 0 , d/dx g(x) > 0 and d^2/dx^2 g(x) > 0 for x >= 0.
g(x) may or may not be entire already.
So sometimes a perfect fit exists. That gives a good way to test our methods , or a good way to estimate the Taylor coefficients.
So let's start :
We want our f(x) to approximate g(x) from below.
So f(x) > g(x) for some x >= 0 is not acceptable.
We want f(x) <= g(x) for all x >= 0.
We use truncated taylor series to find the a_n.
since g(1) should be close to a_0 + a_1 + a_2 + ....
we can say a_0 < g(1) ( and a_0 < f(1) of course )
next we set
a_0 + a_1 x + a_2 x^2 + ... + a_n x^n < f(x) <= g(x)
We truncate :
a_n x^n < a_0 + a_1 x + a_2 x^2 + ... + a_n x^n <= g(x)
and then we do a maneuver : we set
a_n x^n < g(x)
and take it more sharply to get an upper bound :
a_n x^n = g(x)
and from there
a_n = g(x) / x^n
a_n = min [ g(x)/x^n ]
where the minimum is taken over all real x > 0.
This is already very interesting. You could do a lot with that.
since both a_n and g(x)/x^n are positive for x > 0 we can consider this :
a_n = g(x) / x^n
ln(a_n) = ln(g(x)) - n ln(x)
ln(a_n) = min [ ln(g(x)) - n ln(x) ]
So let t_n(x) = ln(g(x)) - n ln(x)
Then
ln(a_n) = min [ ln(g(x)) - n ln(x) ] = min [ t_n(x) ]
Now the minimum must occur where the derivative is zero.
min [ t_n(x) ] = t_n(q_n)
where q_n is the solution to d/dx t_n(x) = 0.
d/dx t_n(x) = g'(x)/g(x) - n/x.
g'(q_n)/g(q_n) - n/q_n = 0
q_n g'(q_n)/g(q_n) = n
So q_n is a function - or inverse function - of n here.
Finally
ln(a_n) = t_n(q_n)
This is the basic idea , and then we try improving it and making conjectures. Both are being done with success.
More about that later.
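The recipe so far is easy to try numerically. Below is a minimal sketch ( the grid search and the function names are my own choices , not part of the theory ) : a_n is taken as the minimum of g(x)/x^n over a grid of positive x.

```python
import math

def fake_coeff(g, n, xs):
    """a_n = min over sampled x > 0 of g(x) / x^n  (grid-search sketch)."""
    return min(g(x) / x**n for x in xs)

# sample grid x in (0, 50]; for g = exp the true minimum sits at x = n
xs = [0.01 * k for k in range(1, 5001)]
coeffs = [fake_coeff(math.exp, n, xs) for n in range(1, 11)]

# closed form for g = exp (derived in the example below): a_n = e^n / n^n
closed = [math.e**n / n**n for n in range(1, 11)]
```

For g = exp the grid minimum lands exactly on x = n , reproducing the closed form that the worked example below derives by hand.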
An example
exp(x) = g(x)
ln(a_n) = min [ ln(g(x)) - n ln(x) ]
ln(a_n) = min [ x - n ln(x) ]
[!!!] substitute x = exp(y) [!!!]
ln(a_n) = min [ exp(y) - n y ]
The derivative of exp(y) - n y = exp(y) - n.
exp(y) - n = 0
gives us
y = ln(n).
so
ln(a_n) = min [ exp(y) - n y ] = exp(ln(n)) - n ln(n) = n - n ln(n) = n ( 1 - ln(n) ) = - n ( ln(n) - 1 )
taking exp on both sides
a_n = exp( - n( ln(n) - 1) ) = 1 / exp(n ln(n) - n ) = e^n / n^n.
Now we know the best solution is 1/n! , but this is pretty close !
It resembles Stirling's approximation for the factorial !
We are within a factor of about sqrt(2 pi n) of the best solution.
This factor is not a coincidence and is related to the conjectures.
There are better techniques that recover this square-root factor for exp(x).
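The square-root factor can be seen numerically : by Stirling's formula n! is about sqrt(2 pi n) (n/e)^n , so the ratio between a_n = e^n/n^n and the true coefficient 1/n! should behave like sqrt(2 pi n). A quick check :

```python
import math

# ratio between the recipe's a_n = e^n / n^n and the true coefficient 1/n!
ratios = [(math.e**n / n**n) * math.factorial(n) for n in range(1, 21)]

# Stirling predicts the ratio is sqrt(2*pi*n) (up to a factor e^(1/(12n)) -> 1)
stirling = [math.sqrt(2 * math.pi * n) for n in range(1, 21)]
```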
***
We use the notation fake( * ) to denote a fake function of *.
"fake" here depends on the method used of course.
Notice that although Taylor series are additive :
A(x) = a_0 + a_1 x + a_2 x^2 + ... , B(x) = b_0 + b_1 x + b_2 x^2 + ...
A(x) + B(x) = (a_0 + b_0) + (a_1 + b_1) x + (a_2 + b_2) x^2 + ...
the operator fake does not necessarily have the additive property for every method ( or the one above ) :
fake( A(x) + B(x) ) =/= fake( A(x) ) + fake( B(x) )
Also
fake( A(B(x)) ) =/= fake( A(t) ) with t = B(x).
Although we could use those ideas as part of a method for some cases.
In the beginning I said the goal was to find entire functions.
But what if the given function g(x) does not satisfy the criteria above ? ( for sqrt(1+x) , for instance , the convexity condition fails )
Fake function theory still helps !
For instance take g(z) = sqrt(1+z) for Real(z) > 0.
Notice that the maximum modulus principle dictates that the maximum , in absolute value , on the disk with radius r occurs at the boundary.
And in this case it occurs on the positive real line.
The same holds for real entire functions with positive Taylor coefficients , since |f(z)| <= f(|z|).
So the real positive line dominates.
This implies that the fake also has this property.
And therefore it will also be a relatively good approximation for the halfplane Real(z) > 0.
So how do we get fake( sqrt(1+x) ) ?
We use this trick.
fake( sqrt(1 + x) ) = fake ( exp(-x) sqrt(1+x) exp(x) ) = exp(-x) fake ( exp(x) sqrt(1 + x) )
And this works very well.
In fact, it is about as good as any more traditional method from numerical methods , calculus, real or complex analysis.
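A numerical sketch of the trick ( the grid sizes and names are my choices , not part of the method ) : apply the min-recipe to h(x) = exp(x) sqrt(1+x) , which is positive , increasing and convex on x >= 0 , then multiply the truncated series by exp(-x). Each term a_n x^n stays below h(x) at the sampled points by construction , and exp(-x) times the series tracks sqrt(1+x) up to a modest square-root-type factor.

```python
import math

def fake_coeffs(g, N, xs):
    # a_n = min over sampled x of g(x) / x^n  (grid-search sketch)
    return [min(g(x) / x**n for x in xs) for n in range(N + 1)]

h = lambda x: math.exp(x) * math.sqrt(1 + x)   # the entire-friendly substitute
xs = [0.05 * k for k in range(1, 2001)]        # x in (0, 100]
a = fake_coeffs(h, 60, xs)

def f(x):
    # truncated candidate for fake( exp(x) sqrt(1+x) )
    return sum(an * x**n for n, an in enumerate(a))

samples = [0.5, 1.0, 2.0, 5.0, 10.0]
ratios = [math.exp(-x) * f(x) / math.sqrt(1 + x) for x in samples]
```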
Related is the Mittag-Leffler function.
See also :
https://math.stackexchange.com/questions/930186/why-is-this-function-a-really-good-asymptotic-for-expx-sqrtx?noredirect=1&lq=1
Similarly we can compute
fake( ln(1 + x^2) ) = exp(- x^2 ) fake( ln(1 + x^2) exp(x^2) )
This was done or mentioned in some places , like for instance here :
***
Another application is to find taylor series that approximate the semi-exp(x) where semi-exp(x) is defined by
semi-exp( semi-exp(x) ) = exp(x).
You can find a way to find such a semi-exp by looking at the page " tetration " on this site.
I will add the example later.
In fact this is the origin of the fake function theory developed by myself (Tom Raes) and Sheldon Levenstein.
***
Notice the related integral :
H(x) = integral from 1 to oo of f(t) x^t dt.
which is of course very close to Taylor or fake function theory , since sums and integrals can be close to each other.
***
Tommy's approximation formula ( mine )
Let g(x) = x^j(x)
and j(x) > 0 , d/dx j(x) > 0
and j(x) < sqrt(1 + x ) + 1
and k(j(x)) = j(k(x)) = x
(k(x) is the functional inverse of j(x))
then using big O notation
fake( x^j(x) ) gives a_n = O (1/ k(n-1) )
Of course this is only potentially valid if it gives an entire function , which is a backward condition.
The formula works better for slowly growing functions.
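A hedged numerical illustration ( the test function is my own choice , and j(x) = ln(x) is only positive for x > 1 , so it fits the stated conditions only roughly ) : take g(x) = x^ln(x) = exp( ln(x)^2 ) , so k(x) = exp(x). Minimizing g(x)/x^n via the substitution u = ln x gives a_n = exp(-n^2/4) exactly , which indeed satisfies a_n = O( 1/k(n-1) ) = O( exp(-(n-1)) ).

```python
import math

g = lambda x: math.exp(math.log(x) ** 2)   # x^ln(x), i.e. j(x) = ln(x)
xs = [0.05 * k for k in range(21, 6001)]   # grid x in (1, 300]
ns = range(3, 11)
a = [min(g(x) / x**n for x in xs) for n in ns]

# substitute u = ln x: minimize u^2 - n*u, giving minimum -n^2/4 at u = n/2
exact = [math.exp(-n * n / 4) for n in ns]
bound = [math.exp(-(n - 1)) for n in ns]   # 1/k(n-1) from the formula
```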
You can also use the formula in reverse :
Consider
f ' (x) = f(x/2)
then
f(x) = sum x^n / ( n! 2^t(n) )
where t(n) = n(n-1)/2 are the triangular numbers.
and you can derive that
f(x) must be close to x^( C (ln(x)/ln(2))^V ) for some V around 1/2 and some modest constant C.
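The claimed solution is easy to verify exactly : matching coefficients in f'(x) = f(x/2) requires (n+1) a_{n+1} = a_n / 2^n , which holds for a_n = 1/( n! 2^t(n) ) with t(n) = n(n-1)/2. A check in exact rational arithmetic :

```python
from fractions import Fraction
import math

def a(n):
    # a_n = 1 / ( n! * 2^t(n) ),  t(n) = n(n-1)/2
    return Fraction(1, math.factorial(n) * 2 ** (n * (n - 1) // 2))

# f'(x) = f(x/2)  <=>  (n+1) * a_{n+1} == a_n / 2^n  for every n >= 0
checks = [((n + 1) * a(n + 1), a(n) / 2**n) for n in range(0, 20)]
```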
Sheldon also has formulas , such as his contour integral method.
But first let me give another example
Like I said before :
**
Another application is to find taylor series that approximate the semi-exp(x) where semi-exp(x) is defined by
semi-exp( semi-exp(x) ) = exp(x).
You can find a way to find such a semi-exp by looking at the page " tetration " on this site.
**
Let's try that now.
a_n = min [ semi-exp(x) / x^n ]
ln( a_n ) = min [ ln(semi-exp(x)) - n ln(x) ]
substitute x = exp(y)
ln( a_n ) = min [ ln( semi-exp( exp(y) )) - n y ]
Now of course ln , exp and semi-exp commute under composition : semi-exp(exp(y)) = semi-exp(semi-exp(semi-exp(y))) = exp(semi-exp(y)) , so ln(semi-exp(exp(y))) = semi-exp(y). Hence
ln( a_n ) = min [ semi-exp(y) - n y ]
Let D(x) = d/dx semi-exp(x)
then min [ semi-exp(y) - n y ] occurs at y = r
and D(r) - n = 0
So r = D^[-1](n)
where D^[-1](x) is the functional inverse of D(x). ( D^[-1](D(x)) = x )
Finally
ln( a_n ) = semi-exp(r) - n r
ln( a_n ) = semi-exp(D^[-1](n)) - n D^[-1](n)
Now this is nice , but an additional approximation gives some insight , although it might make the result less accurate.
D^[-1](x) is somewhat close to ln( semi-exp(x) ) or semi-exp( ln(x) ) at least in terms of iterations of exp.
( semi-ln(x) = ln(semi-exp(x)) = semi-exp(ln(x)) is also used )
Therefore another approximation is
ln( a_n ) = semi-exp( semi-exp(ln(n)) ) - n ln( semi-exp(n) )
ln( a_n ) = exp( ln(n) ) - n ln ( semi-exp(n) )
ln ( a_n ) = n - n ln ( semi-exp(n) )
a_n = exp(n) / exp( n ln( semi-exp(n) ) ) = exp(n) / exp( ln( semi-exp(n) ) )^n
This simplifies further :
a_n = exp(n) / (semi-exp(n))^n
or equivalently
a_n = ( e / semi-exp(n) )^n
The self-reference is striking.
We already knew that semi-exp could not be expressed or even well approximated by standard functions and integers.
But a good fitting taylor series with positive coefficients also has no good asymptotic with standard functions and integers.
But it can be approximated somewhat by standard functions and itself.
Notice that Tommy's approximation formula mentioned above would give a result closer to 1/semi-exp(n) , which is much worse , but also has the self-reference.
Like I said , the Tommy approximation formula works better for slower functions ( like exp( ln(x)^2 ) for instance ).
Of course these are not the best methods ; this is just an introduction.
On the other hand , some methods work better for some functions than for others.
Of course numerical testing has been done for various methods and functions.
Till now we restricted ourselves to the real line and taylor series.
But the implications go beyond.
And the theory also.
For instance , the zeros of the entire function that approximates a given g(z) are also a key component of the theory , and of the related conjectures.
Especially if those zeros are real.
Nevanlinna theory relates , although not obviously.
https://en.wikipedia.org/wiki/Nevanlinna_theory
More related is Wiman-Valiron theory
https://en.wikipedia.org/wiki/Wiman-Valiron_theory
and for slowly growing functions we have the strongly related Hadamard factorization theorem :
https://en.wikipedia.org/wiki/Hadamard_factorization_theorem
especially if g(x) < x^( sqrt x )
( so the order is below 1 and the genus is 0 )
Basically - in that case - our function f(x) ( the Taylor series that approximates g(x) ) is completely and simply determined by its zeros :
f(x) = f(0) (1 + x/t_1) (1 + x/t_2) (1 + x/t_3) ...
and this holds for all complex numbers x , not just the real x.
So those a_n and t_n relate.
And there are ways to get the t_n directly ( not from the a_n ) with their own methods and conjectures , etc.
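As a concrete illustration of such a genus-0 product with only negative real zeros ( this classical example is mine , not from the text above ) : sinh( pi sqrt(x) ) / ( pi sqrt(x) ) is entire in x , has positive Taylor coefficients , and factors as the product over k >= 1 of (1 + x/k^2) , so t_k = k^2.

```python
import math

def zero_product(x, K=20000):
    # partial Hadamard product over the zeros t_k = k^2
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 + x / k**2
    return p

def closed_form(x):
    s = math.pi * math.sqrt(x)
    return math.sinh(s) / s

samples = [0.5, 1.0, 2.0, 4.0]
pairs = [(zero_product(x), closed_form(x)) for x in samples]
```

The tail of the product beyond K contributes roughly a factor exp(x/K) , so the truncation error is small for moderate x.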
See also the related page " Special polynomials " here.
However , apart from real zeros or complex zeros , there are also " fake " or " bad " zeros.
For instance
exp(z) is known never to be 0.
But you could compute exp(z) = (1 + z/t_1)(1 + z^2/t_2)(1 + z^3/t_3)...
That's what I mean by fake or " bad " zeros.
See my warning , quote :
Also to warn others , I want to mention Gottfried's " dream of a sequence " again.
It " pseudofactors " the exp(z).
Why does that matter ? Well because theorems about stable polynomials assume factorizations to hold everywhere.
Hence the analogue theorems for (stable) Taylor series might fail strongly !
To refresh your memory :
https://www.go.helms-net.de/math/musings/dreamofasequence.pdf
Where we take exp(z) / (1 + z) = 1 + (1/2) z^2 + ... etc. and get exp(z) = (1 + z)(1 + (1/2) z^2) ...
Of course this does not converge beyond radius 1. But the illusion of an entire function , or of zeros , is there !
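The pseudofactoring itself is mechanical. Below is my reconstruction ( a sketch , assuming the rule is : repeatedly divide out (1 + c z^k) where c z^k is the lowest non-constant term of what remains ). Note that the third factor already carries a negative coefficient , so the " zeros " the factors suggest are exactly the illusory kind discussed above.

```python
from fractions import Fraction
import math

N = 10                                   # work with series modulo z^(N+1)
P = [Fraction(1, math.factorial(i)) for i in range(N + 1)]   # exp(z)

factors = []                             # pairs (k, c) meaning a factor (1 + c z^k)
while True:
    k = next((i for i in range(1, N + 1) if P[i] != 0), None)
    if k is None:
        break
    c = P[k]
    factors.append((k, c))
    # divide P by (1 + c z^k):  Q[i] = P[i] - c * Q[i-k]
    Q = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        Q[i] = P[i] - (c * Q[i - k] if i >= k else 0)
    P = Q
```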
It is very strongly believed that fake( semi-exp(x) ) has only ( negative ) real zeros , located at predictable places , and there are some deep reasons for that.
( one reason is exp(z) = 0 has no finite complex solution z )
( Btw taking the ln on both sides is helpful :
sum ln(1 + z/t_n) gets close to sum a_n z^n in a way. )
**
For the fake ( semi-exp(x) ) I remarked ( where D = d/dx ) :
**
Back to basics
In addition to posts 17 , 18 , notice that
D exp^[1/2](x) = D exp^[1/2] (exp(x) ) * exp(x) / exp^[1/2](exp(x)).
That follows from ln exp^[a] ( exp(x) ) = exp^[a](x) and the chain rule for derivatives.
By induction / recursion this gives a nice way ( product ) to compute the derivative.
**
This improves the computation and gives more insight.
I mentioned the integral analogue earlier.
This can sometimes help too.
A small example :
integral from 1 to +oo [ t^x g(t) dt ]
with g(t) = exp(- ln(t)^2 )
equals :
(1/2) * ( erf((x+1)/2) +1) * sqrt(pi) * exp( (1/4)* (x+1)^2 ).
another useful example is
integral from 1 to +oo [ t^(-x) g(t) dt ]
with g(t) = ln(t)^v
equals :
v! (x-1)^(-v-1)
for Re(x) > 1.
thereby connecting to Laurent series and more.
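The first closed form is easy to sanity-check numerically ( the substitution and step sizes are my choices ) : putting u = ln t turns the integral into the integral of exp( (x+1)u - u^2 ) over u from 0 to oo , a shifted Gaussian that a plain trapezoid rule handles.

```python
import math

def lhs(x, U=30.0, steps=200000):
    # trapezoid rule for integral_0^oo exp((x+1)*u - u^2) du  (u = ln t)
    du = U / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * du
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp((x + 1) * u - u * u)
    return total * du

def rhs(x):
    # (1/2) * (erf((x+1)/2) + 1) * sqrt(pi) * exp((x+1)^2 / 4)
    return 0.5 * (math.erf((x + 1) / 2) + 1) * math.sqrt(math.pi) \
           * math.exp((x + 1) ** 2 / 4)

I_num, I_closed = lhs(1.0), rhs(1.0)
```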
**
The following might be confusing and it surely needs more work , but it shows the connection to Wiman-Valiron theory :
I wanted to point out a kind of reverse of fake function searching.
Suppose we want to find the fake function for f(x).
then we suspect a candidate
f(x) = sum f_n x^n.
Now we try the following :
1) estimate max_n : max f_n x^n.
To do that set
f_a x^a = f_(a+1) x^(a+1)
This a = g(x) is an indicator for the growth rate and the dominant terms of the Taylor series.
Notice it depends on x.
Now compute another similar indicator , this time for the inferior terms of the Taylor series :
2) estimate max_b : f_b x^b < 1
You might want to set f_n x^n = (x/h_n)^n for convenience ;
it follows that the terms get close to 1 for x around h_n.
b = h(x) is again an indicator of the growth rate and of the number of dominant terms we at least need for our Taylor series.
We have now found the " start of the tail " of the Taylor series.
3) Using a = g(x) and b = h(x) , we are now kinda ready to estimate
I = integral f_n x^n dn from 0 to infinity
by comparing to the estimate
(f_c + f_d) x^c = min f(x)
which is the basis of fake function theory :
=> min f(x) / x^c = f_c + f_d
and compare f_c , f_d , c , d , a , b and the integral I to f(x) ,
giving us a good idea of how good our original estimate was and a way to improve it.
If f_c/(f_c + f_d) is close to a *constant* , you know the fake function estimate was quite decent.
Or if lim f_c/(f_c + f_d) is close to a *constant* , you know the fake function estimate was quite decent asymptotically.
This is a sketch of the idea ; of course calculus tricks can and should be applied !
But that application is case specific , since we might be working with complicated functions.
Also I used an integral , but of course any good approximation of the sum will do , in particular - if possible - actual sum formulas.
Iterating these estimation methods always results in a good estimate.
I want to prove that formally but it might take some time.
**
There exists a method that does better most of the time.
It is based on contour integrals and the gaussian integral.
More later.
Check out the subsections !