Heh, this is fairly interesting. I'd like to briefly give my two cents on what you are doing by taking a detour through fractional calculus. This relates very largely to the indefinite sum, fractional iteration, differsums, and recurrence relations in general. It's largely why I got so into fractional calculus in the first place; it seems to me you're stumbling upon one of the fundamental analytic continuations I use.
To begin, observe what I consider the simplest way to analytically continue the Gamma function.
Break the integral into

$$\Gamma(z) = \int_0^\infty e^{-x} x^{z-1}\,dx = \int_0^1 e^{-x} x^{z-1}\,dx + \int_1^\infty e^{-x} x^{z-1}\,dx.$$

When we look at the first part,

$$\int_0^1 e^{-x} x^{z-1}\,dx = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,(n+z)},$$

and this converges everywhere in $\mathbb{C}$ except at the non-positive integers $z = 0, -1, -2, \dots$. The second part of the integral,

$$\int_1^\infty e^{-x} x^{z-1}\,dx,$$

converges everywhere in the complex plane, giving us the analytic continuation of $\Gamma(z)$ into $\mathbb{C} \setminus \{0, -1, -2, \dots\}$.
It's difficult to say who to attribute this analytic continuation to; I have heard it attributed to Euler, so it goes back a fair way.
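If you want to play with this numerically, here's a quick sketch (my own illustration, using the exact split above):

```python
# A numerical sketch of the continuation: the alternating series handles the
# [0,1] piece and converges for every z off the non-positive integers, while
# the tail integral converges everywhere.
import numpy as np
from math import factorial
from scipy.integrate import quad

def gamma_continued(z, N=60):
    series = sum((-1)**n / (factorial(n) * (n + z)) for n in range(N))
    tail, _ = quad(lambda x: np.exp(-x) * x**(z - 1), 1, np.inf)
    return series + tail

print(gamma_continued(0.5))    # ~ 1.77245 = sqrt(pi)
print(gamma_continued(-0.5))   # ~ -3.54491 = -2*sqrt(pi), outside the original domain
```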
The second player in the game is Ramanujan who, although he didn't provide an absolutely rigorous description of the following phenomenon, still displayed the correct answer in a lot of cases (we can now rigorously say where and when it works). This is called Ramanujan's master theorem. The manner in which Ramanujan would write it is very indicative of the general theory.
Let us consider $\varphi$, a type of operator (really a function, but this was the original language), and then note that,

$$e^{-\varphi x} = \sum_{n=0}^\infty \varphi^n \frac{(-x)^n}{n!},$$

which of course is nothing but a formalism (with $\varphi^n$ standing for $\varphi(n)$), but one which does display some truth. The correct way, as we would write it now, is to write,

$$F(x) = \sum_{n=0}^\infty \varphi(n) \frac{(-x)^n}{n!}.$$

And Ramanujan wrote, sort of unjustified, that,

$$\int_0^\infty x^{z-1} F(x)\,dx = \Gamma(z)\,\varphi(-z).$$
Now, this clearly won't work for arbitrary functions $\varphi$. It will only work for certain functions, and determining which functions is surprisingly not that hard. Let us take a function $\varphi(z)$, holomorphic for $\Re(z) > -1$; let us assume that $|\varphi(z)| \le C e^{\tau|\Im(z)|}$ for some $0 \le \tau < \pi/2$ as $|\Im(z)| \to \infty$. And to keep things simple, let's assume that $|\varphi(z)| \le C e^{\rho|\Re(z)|}$ as $\Re(z) \to \infty$ for arbitrary $\rho > 0$.
Now write the inverse Mellin transform as,

$$F(x) = \frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} \Gamma(z)\,\varphi(-z)\,x^{-z}\,dz, \qquad 0 < c < 1.$$

Now, the poles of this integrand in the left half plane are at the negative integers, and their residues are,

$$\mathrm{Res}_{z=-n}\,\Gamma(z)\,\varphi(-z)\,x^{-z} = \frac{(-1)^n}{n!}\,\varphi(n)\,x^n.$$

Doing a standard exercise in contour integration, and thanks to the nice asymptotics of the Gamma function (think Stirling) and the exponential bounds on $\varphi$, we get,

$$F(x) = \sum_{n=0}^\infty \varphi(n)\frac{(-x)^n}{n!}.$$
Now, by way of the Mellin transform (thanks to the Fourier transform), we know instantly that $F(x) = \mathcal{O}(1/x)$ as $x \to \infty$, and therefore the integral

$$\int_0^\infty x^{z-1} F(x)\,dx$$

converges for $0 < \Re(z) < 1$. And so we can take the Mellin transform, and by its invertible nature,

$$\int_0^\infty x^{z-1} F(x)\,dx = \Gamma(z)\,\varphi(-z) \quad\text{for } 0 < \Re(z) < 1.$$
Now, we enter in the above analytic continuation we did for the Gamma function (which works in the exact same manner as above), to arrive at,

$$\Gamma(z)\,\varphi(-z) = \sum_{n=0}^\infty \frac{(-1)^n \varphi(n)}{n!\,(n+z)} + \int_1^\infty x^{z-1} F(x)\,dx,$$

which is valid for $\Re(z) < 1$.
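Here's a quick numerical check of this chain on a toy choice (my own example, not from your paper): with $\varphi(n) = \frac{1}{n+1}$ we get $F(x) = \frac{1 - e^{-x}}{x}$, and the Mellin transform should come out to $\Gamma(z)\varphi(-z) = \frac{\Gamma(z)}{1-z}$:

```python
# Numerical sanity check of Ramanujan's master theorem on a toy example:
# phi(n) = 1/(n+1) gives F(x) = (1 - e^{-x})/x, and we expect
#   int_0^inf x^{z-1} F(x) dx = Gamma(z) * phi(-z) = Gamma(z)/(1 - z).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

F = lambda x: (1.0 - np.exp(-x)) / x
phi = lambda z: 1.0 / (z + 1.0)

for z in [0.25, 0.5, 0.75]:                  # stay inside the strip 0 < Re(z) < 1
    mellin, _ = quad(lambda x: x**(z - 1) * F(x), 0, np.inf)
    print(z, mellin, gamma(z) * phi(-z))     # the two columns should agree
```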
And upon here, we enter the third player: the Riemann-Liouville differintegral. Now there are a bunch of differintegrals, but the one we want is the Riemann-Liouville, because it is really the Mellin transform in disguise. We write it in a slightly nonstandard form, but it's really nothing but a change of variables. Here, we change our function of interest to remove the pesky alternation. Let us write,

$$f(x) = F(-x) = \sum_{n=0}^\infty \varphi(n)\frac{x^n}{n!}, \qquad \frac{d^z}{dx^z}\Big|_{x=0} f = \frac{1}{\Gamma(-z)}\int_0^\infty f(-x)\,x^{-z-1}\,dx = \varphi(z).$$
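As a first sanity check of this formula (again my own toy case): take $\varphi(n) = 1$ for all $n$, so $f(x) = e^x$; for $\Re(z) < 0$ the integral converges as written and should return $\varphi(z) = 1$:

```python
# With phi(n) = 1 we have f(x) = e^x, f(-x) = e^{-x}, and for Re(z) < 0
#   (1/Gamma(-z)) * int_0^inf e^{-x} x^{-z-1} dx = Gamma(-z)/Gamma(-z) = 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

for z in [-0.5, -1.5, -2.25]:
    val, _ = quad(lambda x: np.exp(-x) * x**(-z - 1), 0, np.inf)
    print(z, val / gamma(-z))    # should print 1.0 each time
```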
Now, we have an equivalence here. I'm going to skip a bit, but bear with me. Assume that $|f(x)|$ has decay of the form $\mathcal{O}(|x|^{-b})$ for $|\arg(-x)| < \theta < \pi/2$ as $|x| \to \infty$ in this sector. Then there exists a function $\varphi(z)$ which is holomorphic for $\Re(z) > -b$, satisfies $|\varphi(z)| \le Ce^{(\pi/2 - \theta)|\Im(z)|}$, and has $\varphi(z)\Gamma(-z) \to 0$ in a fast enough manner to be integrated, and for the sum to be entire.
With this we have an isomorphism between differintegration and very well-behaved functions in the right half plane $\Re(z) > -b$. Now, it's important to remember the rules of differintegration. These are that,

$$\frac{d^{-z}}{dx^{-z}} f(x) = \frac{1}{\Gamma(z)}\int_{-\infty}^{x} (x-t)^{z-1} f(t)\,dt, \qquad \frac{d^{z}}{dx^{z}}\,\frac{d^{w}}{dx^{w}}\,f = \frac{d^{z+w}}{dx^{z+w}}\,f,$$

so that integration is inherently integration with the lower limit at negative infinity (this is important to ensure consistency). And it is hereupon that we can return to Ramanujan's notation a bit more rigorously. I'll take the example of fractional iteration, as it's a nice one to start with (though the same heuristic works for the indefinite sum, indefinite product, and the sort). Let us write,

$$(\mathcal{C}_F\,g)(x) = g(F(x)),$$

which is a linear operator. Now, we write,

$$\vartheta(w, x) = e^{w\,\mathcal{C}_F}\,x = \sum_{n=0}^\infty F^{\circ n}(x)\frac{w^n}{n!}.$$
Let's assume that this is differintegrable (the real challenge is in showing this happens). For the case that $F$ has a fixed point with multiplier $\lambda$, $0 < |\lambda| < 1$, this is doable, thanks to Schröder's equation (and it is doable in other scenarios as well). And using Ramanujan's abuse of notation, we write,

$$F^{\circ z}(x) = \frac{d^z}{dw^z}\Big|_{w=0} \vartheta(w, x).$$

And let us combine this with the semi-group property of the differintegral,

$$F^{\circ z}(F^{\circ s}(x)) = F^{\circ z+s}(x),$$

which tells us the semi-group property of the differintegral is preserved when iterating operators! Thus, all we really need is to prove the auxiliary function is differintegrable to express it in a manner that only depends on its natural iterates. We use $F^{\circ n}$ to create $F^{\circ z}$; so long as we have a differintegrability condition (which proves very hard to guarantee from first principles, unfortunately). Also, next to this, we are guaranteed a uniqueness condition. I refer to this as Ramanujan's identity theorem; a much stronger version was proven by Carlson (Carlson's theorem)--but we don't need the strength of Carlson.
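To make the operator story concrete, here's a minimal sketch on a toy map where everything is known in closed form (my own example; `frac_iterate` is just a name I made up): $F(x) = \lambda x$ has fixed point $0$ with multiplier $\lambda$, so $F^{\circ n}(x) = \lambda^n x$ and $\vartheta(w, x) = x e^{\lambda w}$, and the differintegral, taken in the same split form as the Gamma continuation above, should return $F^{\circ z}(x) = \lambda^z x$ at fractional $z$:

```python
# Fractional iteration of F(x) = lam*x via the analytically-continued
# differintegral of theta(w,x) = x*e^{lam*w} at w = 0:
#   (1/Gamma(-z)) * [ sum_n (-1)^n F^n(x)/(n!(n-z)) + int_1^inf theta(-t,x) t^{-z-1} dt ]
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import rgamma          # rgamma(z) = 1/Gamma(z), safe at the poles

def frac_iterate(lam, x, z, N=60):
    series = sum((-1)**n * lam**n * x / (factorial(n) * (n - z)) for n in range(N))
    tail, _ = quad(lambda t: x * np.exp(-lam * t) * t**(-z - 1), 1, np.inf)
    return rgamma(-z) * (series + tail)

lam, x = 0.5, 2.0
for z in [0.5, 1.5, 2.5]:                 # fractional iteration heights
    print(z, frac_iterate(lam, x, z), lam**z * x)   # columns should agree
```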
Assume that $\varphi$ is one of these well-bounded functions, and let's assume that $\varphi(n) = 0$ for all $n \in \mathbb{N}$. Well, by the isomorphism, and the interchange, our function,

$$f(x) = \sum_{n=0}^\infty \varphi(n)\frac{x^n}{n!} = 0,$$

and therefore, when taking the differintegral, we prove that $\varphi(z) = 0$. Which implies that if $\varphi(n) = 0$ and $\varphi$ is bounded in this nice manner, it must be zero everywhere. From this we can derive: if we have two functions $E_1(z)$ and $E_2(z)$, bounded in the appropriate manner, which satisfy $E_1(n) = E_2(n)$, then $E_1 - E_2$ is bounded the same and is zero on the naturals, therefore $E_1 - E_2 = 0$ everywhere and $E_1 = E_2$. Which lets us conclude we have a uniqueness condition on this space.
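A quick illustration of why the growth bound is doing the work here (my own example): $\sin(\pi z)$ vanishes on every natural number yet is not identically zero; it escapes the space precisely because it grows like $e^{\pi|\Im(z)|}$, which beats the allowed $e^{\tau|\Im(z)|}$ with $\tau < \pi/2$:

```python
# sin(pi*z) is zero on all of N but not identically zero; on vertical lines
# |sin(pi(x+iy))| ~ e^{pi y}/2, so it violates the e^{tau|Im z|}, tau < pi/2 bound.
import numpy as np

for y in [1.0, 5.0, 10.0]:
    z = 0.5 + 1j * y
    print(y, np.abs(np.sin(np.pi * z)), np.exp(np.pi * y) / 2)   # nearly equal
```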
This is infinitely helpful when talking about iterations. For the indefinite sum, for instance, all we need is that it's bounded in this manner--it's unique. For the fractional iteration, for instance, all we need is that it's bounded in this manner--it's unique. Etc etc...
And here is where we come to what I think you are beginning to uncover. Though here you are looking at the Gamma function itself, rather than the Gamma function as a multiplier. Let us take a function $\varphi(z)$ which exists in this bounded space, and let us assume that $\varphi(n) = 1$ for the natural numbers. Then we are guaranteed that $\varphi(z) = 1$ everywhere, because it is in this space. Now, this is actually a uniqueness condition for the Gamma function in disguise, where we are instead declaring the uniqueness of the equation,

$$\varphi(z)\,\Gamma(z) = \Gamma(z).$$

Which appears trivial, but it really isn't. Because there can be no functions $g(z) \neq \Gamma(z)$ and $\varphi(z)$ in this space such that,

$$g(z) = \varphi(z)\,\Gamma(z),$$

where $\varphi(n) = 1$ and $\varphi(z) \not\equiv 1$.
Now, how this all ties to Andy's slog seems to be that we are going to work in the same manner. I can't speak for Andy's slog (I do not know enough), but if it's holomorphic for $\Re(z) > -1$ and it satisfies these bounds (which makes perfect sense because of how slowly the slog function grows), we would declare a uniqueness condition as,

$$\phi \text{ in this space, } \phi(n) = \mathrm{slog}(n) \text{ for all } n \in \mathbb{N} \implies \phi(z) = \mathrm{slog}(z).$$

Which means there is only one function in this space which interpolates the values of Andy's slog at $n \in \mathbb{N}$--and this solution happens to solve the slog equation. Now, I do not know if this works, because I don't know where Andy's slog is holomorphic (or if it even is). I do not know if any slog will work, tbh, because I've never studied their domains of holomorphy. But the bounds shouldn't be a problem; the question is just whether they're holomorphic in a right half plane. This doesn't help much constructively, because finding the values $\mathrm{slog}(n)$ is rather impossible without solving for $\mathrm{slog}$ first.
This means the manner you are using (truncating matrices and approximating solutions) may be equivalent to the Ramanujan method using the differintegral. It certainly is using the Gamma function, which may be helpful for me to re-explain. Assume that $g(z)$ interpolates the factorial and satisfies,

$$z\,g(z) = g(z+1).$$

Then, additionally, assume that $\frac{g(z)}{\Gamma(z)}$ is in this nice space; then,

$$\frac{g(z)}{\Gamma(z)} = 1.$$

Now, this may seem trivial, again. But the manner in which you are breaking up the Gamma function in your paper is actually pretty much all of the above in disguise.
I hope this might be helpful. Anyway, I had fun writing it. Not least, it should explain a lot of the work I've been showing you lately.
Regards, James.
EDIT:
I also thought it may be helpful to describe how the functional relationship of indefinite summation can be interpreted in the fractional calculus method. Let us assume that $\varphi$ is in this bounded space, and define,

$$f(x) = \sum_{n=0}^\infty \varphi(n)\frac{x^n}{n!}, \qquad g(x) = \sum_{n=0}^\infty \Big(\sum_{k=0}^n \varphi(k)\Big)\frac{x^n}{n!}.$$

Now, these functions satisfy the relationship,

$$g'(x) - g(x) = f'(x).$$

Now, we know that,

$$\frac{d^z}{dx^z}\Big|_{x=0} f = \varphi(z).$$

And, as I showed in the paper I posted in the other thread, the function $g$ is also differintegrable. Therefore define,

$$\psi(z) = \frac{d^z}{dx^z}\Big|_{x=0} g.$$

Then,

$$\psi(z+1) - \psi(z) = \varphi(z+1),$$

which is the functional equation of indefinite summation. We can do this with many functional equations, and it seems pretty universal in its effectiveness. The same procedure works with indefinite products (but it's a tad less effective).
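And a numerical check of this construction on a toy choice (my own example): with $\varphi(n) = n$ we get $\psi(n) = n(n+1)/2$ and $g(x) = (x^2/2 + x)e^x$, so the differintegral of $g$ should reproduce $\psi(z) = z(z+1)/2$ off the naturals:

```python
# Indefinite sum of phi(n) = n via the differintegral of its EGF partner g,
# using the same split continuation as before; note g(-t) = (t^2/2 - t) e^{-t}.
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import rgamma          # 1/Gamma

def psi(z, N=60):
    series = sum((-1)**n * (n * (n + 1) / 2) / (factorial(n) * (n - z))
                 for n in range(N))
    tail, _ = quad(lambda t: (t * t / 2 - t) * np.exp(-t) * t**(-z - 1), 1, np.inf)
    return rgamma(-z) * (series + tail)

for z in [0.5, 1.5, 3.25]:
    print(z, psi(z), z * (z + 1) / 2)     # columns should agree
```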

It works with matrix operators as well: if we consider $A$, a diagonalizable square $n \times n$ matrix, and assume a certain kind of distribution of the eigenvalues (I don't know the word for it; I can write it out if you want), then the iteration,

$$A^z = \frac{d^z}{dw^z}\Big|_{w=0} \sum_{n=0}^\infty A^n \frac{w^n}{n!},$$

is just as convergent in the matrix space, giving a rather quick way of iterating matrices... This may help in your truncating-matrix method and iterating them. But don't ask me how--it just occurred to me it might help.
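For what it's worth, here's a hedged sketch of what that could look like (my own construction, assuming the eigenvalues have positive real part so that $e^{-tA}$ decays): the same split differintegral, applied entry-wise, should match the spectral answer $A^z = V\,\mathrm{diag}(\lambda^z)\,V^{-1}$ for diagonalizable $A$:

```python
# Fractional matrix iteration via the split differintegral, checked against
# the spectral definition. Assumes A diagonalizable with positive spectrum.
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.linalg import expm
from scipy.special import rgamma

def matrix_iterate(A, z, N=40, T=50.0):
    series = sum((-1)**n * np.linalg.matrix_power(A, n) / (factorial(n) * (n - z))
                 for n in range(N))
    tail = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):       # exp(-t*A) is negligible past t = T here
            tail[i, j], _ = quad(lambda t: expm(-t * A)[i, j] * t**(-z - 1), 1, T)
    return rgamma(-z) * (series + tail)

A = np.array([[1.0, 0.3], [0.1, 2.0]])    # diagonalizable, spectrum ~ {0.97, 2.03}
lam, V = np.linalg.eig(A)
z = 0.5
print(matrix_iterate(A, z))
print(V @ np.diag(lam**z) @ np.linalg.inv(V))   # should match
```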