https://tetrationforum.org/showthread.php?tid=1339
Tommy's Gaussian method :
f(s) = exp(t(s) * f(s-1))
t(s) = (erf(s)+1)/2
Notice that t(s - oo) = 0 and t(s + oo) = 1 for every fixed (finite complex) s ; that is , lim t(s - x) = 0 and lim t(s + x) = 1 as real x goes to +oo.
A plot of t(w) would show that it converges fast to 0 in the left triangle and to 1 in the right triangle of an X-shaped region.
That X shape is approximately given by Re(w)^2 = Im(w)^2 , i.e. the two straight lines Im(w) = Re(w) and Im(w) = -Re(w) , meeting at the origin in four 90 degree angles.
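A quick real-axis check of those limits, sketched with only the Python stdlib (the sample points -4, 0, 4 are arbitrary illustration choices of mine):

```python
import math

def t(s):
    # t(s) = (erf(s) + 1) / 2 : the Gaussian-based weight from the post
    return (math.erf(s) + 1.0) / 2.0

# t tends to 0 fast on the left and to 1 fast on the right
print(t(-4.0))  # already below 1e-8
print(t(0.0))   # exactly 1/2 by the odd symmetry of erf
print(t(4.0))   # already above 1 - 1e-8
```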
Therefore we can consistently define f for all s without singularities or poles ( hence t(s) and f(s) are entire ! )
f(s) = exp( t(s) * exp( t(s-1) * exp( t(s-2) * ... ) ) )
thereby making f(s) an entire function !
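This nested exponential can be evaluated by truncating at finite depth; a sketch (the depth 30 and the innermost seed 1.0 are my choices, and they are harmless because the seed gets multiplied by t(s - depth), which is astronomically small):

```python
import math

def t(s):
    return (math.erf(s) + 1.0) / 2.0

def f(s, depth=30):
    # Truncation of f(s) = exp( t(s) * exp( t(s-1) * exp( t(s-2) * ... ) ) ).
    val = 1.0  # innermost seed; t(s - depth) ~ erfc(depth)/2 makes it irrelevant
    for k in range(depth, -1, -1):
        val = math.exp(t(s - k) * val)
    return val

# the truncation satisfies the functional equation f(s) = exp( t(s) * f(s-1) )
print(f(0.0))                                    # about 1.7178
print(abs(f(1.0) - math.exp(t(1.0) * f(0.0))))   # essentially 0
```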
Now we pick a point , say e ,
to consistently define
exp^[s](e)
and then by analytic continuation from e to z ;
exp^[s](z).
We know this analytic continuation exists because f(s) is entire and for some appropriate q we must have exp^[q](e) = z.
By picking the correct branch we also get the slog function.
It should be as simple as ( using little-o notation )
lim n to +oo , Re( R ) > 0 ;
exp^[R](z) = ln^[n] ( f( g(z) + n + R) ) + o( t(-n+R) )
and of course using the appropriate branches of ln and g.
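On the real line one can watch this limit stabilize. Below, F(s, n) = ln^[n]( f(s + n) ), with f truncated as before; this is only a convergence sketch of my own (g and the branch bookkeeping are omitted), and n is kept small because f(s + n) overflows double precision already around n = 4:

```python
import math

def t(s):
    return (math.erf(s) + 1.0) / 2.0

def f(s, depth=30):
    # truncated f(s) = exp( t(s) * exp( t(s-1) * ... ) )
    val = 1.0
    for k in range(depth, -1, -1):
        val = math.exp(t(s - k) * val)
    return val

def F(s, n):
    # F(s, n) = ln^[n]( f(s + n) ) : n iterated real logarithms
    val = f(s + n)
    for _ in range(n):
        val = math.log(val)
    return val

# successive approximations at s = 0 agree to rapidly improving accuracy,
# consistent with a fast-decaying error term like o( t(-n + R) )
print(F(0.0, 1), F(0.0, 2), F(0.0, 3))
```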
***
For the external proof with error1(s) , my idea is to mainly use the absolute value.
To manage the imaginary parts , the idea is that we take the correct log branches in a consistent way and then we always take the same branch for the same neighbourhood ... thus a branch jump of at most 1 down or 1 up from the infinitesimal neighbourhood.
This leads to the partial error term o(L) ( little-o notation for an absolute value bound )
where L satisfies L - 2pi = ln(L)
< follows from L = 2 pi + ln(2 pi + ln(2 pi + ... ) ) >
L = 8.4129585032844...
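That constant can be checked by iterating the defining relation (a sketch; the starting guess 8.0 is arbitrary, and the map x -> 2 pi + ln(x) contracts since its derivative 1/x is about 0.12 near the root):

```python
import math

# solve L = 2*pi + ln(L), i.e. L - 2*pi = ln(L), by fixed-point iteration:
# L = 2*pi + ln(2*pi + ln(2*pi + ...))
L = 8.0
for _ in range(60):
    L = 2.0 * math.pi + math.log(L)

print(L)  # about 8.41295850...
```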
Notice that ln(1) is never taken to be 0 here ; it must be another branch ( a nonzero multiple of 2 pi i ).
ln(0) never occurs.
***
This method also works for base
X^X = X