Re: Bernoulli numbers and sqrt(1)+sqrt(2)+sqrt(3) + ... sqrt(1000)
Posted: Feb 18, 2013 2:34 PM
On 18.02.2013 18:40, David Bernier wrote:
> On 02/18/2013 07:17 AM, Gottfried Helms wrote:
>> I have to correct some obvious typing errors:
>>
>> On 18.02.2013 09:38, David Bernier wrote:
>>> On 02/18/2013 02:26 AM, Gottfried Helms wrote:
>>>
>>> The summatory polynomial for the k'th powers of 1, 2, ..., n,
>>> P(x), has the property that P(x) - P(x-1) = x^k,
>>> at least for positive integers x.
>>> I assume k is a positive integer.
>>>
>>> So, does there exist a continuous f: [a, oo) -> R such that
>>> f(x) - f(x-1) = sqrt(x) for any x in [a, oo)?
>>>
>>> Ramanujan wrote a paper on sums of consecutive square roots:
>>>
>>> http://oeis.org/A025224/internal
>>>
>>> david bernier
>>>
>> Hmm, my usual tools give only badly diverging series, for which my
>> procedures of divergent summation do not work well if I use - say -
>> only 64 or 128 terms of the power series.
>> But I have now converted the problem into one which uses the rationale
>>
>> f(x) = sqrt(1+x^2)
>>
>> such that f(1) = sqrt(2), f(f(1)) = sqrt(3) and so on, so that we
>> can write
>>
>> S(a,b) = sqrt(a) + f(sqrt(a)) + f(f(sqrt(a))) + ... + f(...f(sqrt(a))...)
>>        = f°0(sqrt(a)) + f°1(sqrt(a)) + f°2(sqrt(a)) + ... + f°d(sqrt(a))
>>
>> where d = b-a and the circle in f°k(x) indicates the k'th iterate.
>>
>> After that the problem can be attacked with the concept of Carleman matrices
>> and their powers. Let F be the Carleman matrix of the function f(x),
>> and let G = I - F. Then, if we could invert G in the sense that
>>
>> M = G^-1 = I + F + F^2 + F^3 + F^4 + ...   (see Neumann series, Wikipedia)
>>
>> we would have a solution in terms of a formal power series for
>>
>> m(x) = x + f(x) + f°2(x) + f°3(x) + ...
>>
>> and m(sqrt(a)) - m(sqrt(b)) would give the required sum of square roots from
>> sqrt(a) to sqrt(b) in steps of 1 in the argument.
>>
>> However, G has a zero in the top-left entry (and in the whole first column)
>> and cannot be inverted.
>> Now there is a proposal, which I have seen a couple of times, to simply
>> invert the upper square submatrix obtained after removing the first (empty)
>> column of G; let's call this H. This often gives an at least usable
>> approximate solution, if not an arbitrarily exact one.
>>
>> But again - trying this with Pari/GP leads to inconclusive results; the
>> inversion process seems to run into divergent series again.
>>
>> Here we now have the possibility to LDU-factorize H into triangular
>> factors, each of which can be inverted, so we have formally
>>
>> H = L * D * U
>> M = H^-1 = U^-1 * (D^-1 * L^-1)
>>
>> We cannot perform that multiplication because of the still strongly divergent
>> series that arise when evaluating the row-column dot products (which only
>> makes explicit the helpless Pari/GP attempts at inverting H).
>>
>> But we can use the upper triangular U^-1 in the sense of a "change of basis"
>> operation. Our goal is to have M such that we can write
>>
>> (V(sqrt(a)) - V(sqrt(b))) * M[,2] = s(a,b+1) = sqrt(a)+sqrt(a+1)+...+sqrt(b)
>>
>> where V(x) means a Vandermonde vector V(x) = [1, x, x^2, x^3, x^4, ...].
>>
>> But now we can proceed from the formal formula
>>
>> (V(sqrt(a)) - V(sqrt(b))) * M[,2]
>>    = (V(sqrt(a)) - V(sqrt(b))) * U^-1 * (D^-1 * L^-1)[,2]
>>
>> and can compute
>>
>> X(sqrt(a),sqrt(b)) = (V(sqrt(a)) - V(sqrt(b))) * U^-1
>>
>> exactly to any truncation size, because U^-1 is upper triangular and thus
>> column-finite.
>>
>> Then we can just as well compute
>>
>> Q = (D^-1 * L^-1)[,2]
>>
>> which is - besides the truncation to finite size - an exact expression.
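
(For concreteness, a minimal Pari/GP sketch of this setup; the truncation size n = 16, the helper names and the Carleman-matrix convention below are ad-hoc choices for illustration, not the exact code I used:)

   \\ f(x) = sqrt(1+x^2) steps one square root to the next: f°k(sqrt(a)) = sqrt(a+k)
   f(x) = sqrt(1 + x^2);

   \\ S(a,b) = sqrt(a) + sqrt(a+1) + ... + sqrt(b), built purely from iterates of f
   S(a, b) = my(t = sqrt(a), s = sqrt(a)); for(k = 1, b - a, t = f(t); s += t); s

   S(5, 10)                   \\ ~ 16.3220138..., same as the direct sum below
   sum(k = 5, 10, sqrt(k))    \\ direct check

   \\ truncated Carleman matrix F of f (size n x n), with the convention
   \\ V(x)*F ~ V(f(x)) for the row vector V(x) = [1, x, x^2, ...]:
   \\ column j holds the series coefficients of f(x)^(j-1) at x = 0
   n  = 16;
   fs = sqrt(1 + x^2 + O(x^16));
   F  = matrix(n, n, i, j, polcoeff(fs^(j-1), i - 1));
   G  = matid(n) - F;         \\ the matrix I - F that resists direct inversion
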
>>
>> Still we have that the dot product
>>
>> s(a,b) = X(sqrt(a),sqrt(b)) * Q
>>
>> is divergent, but now it seems that we can apply Euler summation
>> for the evaluation of the divergent dot product.
>> The results are nice approximations for the first couple of
>> sums s(1,4), s(2,4) and some more. s(1,9) requires Euler summation
>> of some higher order, such that I get
>>
>> s(1,9) ~ 16.3060
>> (where 16.3060005260 is exact to the first 11 digits)
>>
>> or
>> s(5,10) = sqrt(5)+sqrt(6)+ ... +sqrt(10)
>>         ~ 16.32199
>> (where 16.3220138163 is exact to the first 11 digits)
>>
>> I don't know how to do better at the moment; surely, if the composition
>> of the entries in the matrices were better known to me, one could
>> derive better, possibly even analytical or at least less divergent, expressions.
>>
>> (But I haven't enough time to step into this deeper, I'm afraid.)
>
> Hi Gottfried,
>
> The exponent for the natural numbers that I'm most interested
> in is a = -s, with s = 1/2 + i*t, t real in [0, oo), as in
>
> 1^a + 2^a + ... + (N-1)^a
> or
> 1 + 1/2^s + ... + 1/(N-1)^s, because in Euler-Maclaurin
> summation for zeta(s),
> 1/1^s + 1/2^s + ... + 1/(N-1)^s is computed, for some N > |t|.
>
> The other terms are a sum involving the Bernoulli numbers,
> and also (1/2) N^(-s) + N^(1-s)/(s-1) [Edwards].
>
> But a "shortcut" to evaluate
> 1/1^s + 1/2^s + ... + 1/(N-1)^s to high precision would
> save time in the Euler-Maclaurin summation for zeta(s),
> so this connects to algorithms to compute zeta(s) to
> high precision, a much-studied topic.
>
> It seems pretty hard too ...
>
> David
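
(If I read your description correctly, this is the Euler-Maclaurin form you mean; a rough Pari/GP sketch, where the name zetaEM and the cutoffs N and K are only my ad-hoc choices for illustration:)

   \\ Euler-Maclaurin approximation: partial sum up to N-1, then the terms
   \\ N^(1-s)/(s-1) + N^(-s)/2 and a Bernoulli-number tail
   {
     zetaEM(s, N = 50, K = 10) =
         sum(n = 1, N - 1, n^(-s))
       + N^(1-s)/(s-1) + N^(-s)/2
       + sum(r = 1, K,
             bernfrac(2*r)/(2*r)! * prod(j = 0, 2*r - 2, s + j) * N^(1 - s - 2*r));
   }

   zetaEM(1/2 + 30*I) - zeta(1/2 + 30*I)   \\ compare with the built-in zeta
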
Hmm, after reading that, I remember that I have tried to accelerate the computation of zeta/eta(s) with arguments in that area by Euler summation of adapted (namely complex) order. It seemed to me that this is a veritable "poor man's" acceleration for the actual computations. Actually it comes out to be

   eta(s) ~ sum_{k=1..n} c(s,o,k) * (-1)^k / k^s

for some sufficiently large (finite) n, with the coefficients c(s,o,k) obtained from the formulae of Euler summation of an optimal order o, best adapted to the argument s.
And then zeta(s) = eta(s)/(1-2*2^-s)
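
As a minimal sketch of the principle in Pari/GP (this is the plain, integer-order Euler transformation written in forward-difference form, not my adapted complex-order weights; the names etaE, zetaE and the cutoff M are only ad-hoc choices):

   \\ Euler transformation of the alternating eta series; collecting the
   \\ binomial weights per term 1/(j+1)^s yields per-term coefficients
   \\ playing the role of the c(s,o,k) above
   {
     etaE(s, M = 100) =
       sum(m = 0, M - 1,
           2^(-m-1) * sum(j = 0, m, (-1)^j * binomial(m, j) / (j + 1)^s));
   }

   zetaE(s, M = 100) = etaE(s, M) / (1 - 2*2^(-s));

   zetaE(1/2 + 30*I) - zeta(1/2 + 30*I)   \\ compare with the built-in zeta
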
I don't need a high n, say n ~ 100, for approximations that are very usable for my purposes. I have made the influence of the order o on the summation process visible in an Excel sheet, where o can be changed continuously; the trajectory of the resulting partial sums then becomes "longer" or "shorter" (needing a greater or smaller n to come farther away from or nearer to the final result), depending on o. Very impressive in my view! If this is of interest I could upload that Excel sheet to my webspace. (It's of "garage" type, just for the experimenter, no nicely brushed appearance.)
Gottfried Helms