Topic: Bernoulli numbers and sqrt(1)+sqrt(2)+sqrt(3) + ... sqrt(1000)
David Bernier

Re: Bernoulli numbers and sqrt(1)+sqrt(2)+sqrt(3) + ... sqrt(1000)
Posted: Feb 18, 2013 12:40 PM

On 02/18/2013 07:17 AM, Gottfried Helms wrote:
> I have to correct some obvious typing errors:
>
> On 18.02.2013 09:38, David Bernier wrote:

>> On 02/18/2013 02:26 AM, Gottfried Helms wrote:
>>>
>>>

>>
>> The summatory polynomial P(x) for the k-th powers of 1, 2, ..., n
>> has the property that P(x) - P(x-1) = x^k,
>> at least for positive integers x.
>> I assume k is a positive integer.
>>
>> So, does there exist a continuous f: [a, oo) -> R such that
>> f(x) - f(x-1) = sqrt(x) for any x in [a, oo)?
>>
>> Ramanujan wrote a paper on sums of consecutive square roots:
>>
>> http://oeis.org/A025224/internal
>>
>> david bernier
>>
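
For the integer-exponent case quoted above, the summatory polynomial can be
checked directly. A minimal Python/sympy sketch (the exponent k = 3 and the
names are illustrative only, not from the thread):

import sympy as sp

x, j = sp.symbols('x j')
k = 3   # any positive integer exponent

# Summatory (Faulhaber) polynomial P(x) = 1^k + 2^k + ... + x^k;
# its coefficients involve the Bernoulli numbers.
P = sp.expand(sp.summation(j**k, (j, 1, x)))
print(P)                                         # x**4/4 + x**3/2 + x**2/4

# Telescoping property quoted above: P(x) - P(x-1) = x^k
print(sp.simplify(P - P.subs(x, x - 1) - x**k))  # 0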

> Hmm, my usual tools give only strongly divergent series, for which my
> procedures for divergent summation do not work well if I use, say,
> only 64 or 128 terms of the power series.
> But I've now converted the problem into one which uses the following idea:
>
> f(x) = sqrt(1+x^2)
> such that f(1) = sqrt(2), f(f(1)) = sqrt(3), and so on, so that we
> can write
> S(a,b) = sqrt(a) + f(sqrt(a)) + f(f(sqrt(a))) + ... + f(...f(sqrt(a))...)
> = f°0(sqrt(a)) + f°1(sqrt(a)) + f°2(sqrt(a)) + ... + f°d(sqrt(a))
>
> where d = b-a and the circle in f°k(x) denotes the k'th iterate.
>
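
A quick numerical sanity check of this iteration (plain Python; the function
names are illustrative only): since f(sqrt(n)) = sqrt(n+1), summing the
iterates of f starting from sqrt(a) reproduces the consecutive square roots;
the value below matches the 16.3220138163 quoted further down.

import math

def f(x):
    # one iteration step: f(sqrt(n)) = sqrt(n + 1)
    return math.sqrt(1 + x * x)

def sum_sqrt_by_iteration(a, b):
    # sqrt(a) + f(sqrt(a)) + f(f(sqrt(a))) + ... = sqrt(a) + ... + sqrt(b)
    x, total = math.sqrt(a), 0.0
    for _ in range(b - a + 1):
        total += x
        x = f(x)
    return total

print(sum_sqrt_by_iteration(5, 10))              # 16.3220138...
print(sum(math.sqrt(n) for n in range(5, 11)))   # same value, summed directly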
> After that, the problem can be attacked with the concept of Carleman matrices
> and their powers. Let F be the Carleman matrix for the function f(x),
> and let G = I - F. Then, if we could invert G in the sense that
>
> M = G^-1 = I + F + F^2 + F^3 + F^4 + ... (see Neumann series, Wikipedia)
>
> then we would have a solution in terms of a formal power series for
>
> m(x) = x + f(x) + f°2(x) + f°3(x) + ...
>
> and m(sqrt(a)) - m(sqrt(b)) would give the required sum of square roots from
> sqrt(a) to sqrt(b), stepping the argument under the root by 1 each time.
>
> However, G has a zero in the top-left entry (in fact the whole first column
> is zero) and so cannot be inverted.
> There is a proposal, which I've seen a couple of times, to simply invert
> the upper square submatrix obtained after removing the first (empty) column
> of G; let's call this H. This often gives an at least usable
> approximate solution, if not an arbitrarily exact one.
>
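
The Carleman-matrix construction for f(x) = sqrt(1+x^2) can be reproduced in
a few lines of Python/sympy (the truncation size N = 8 is an arbitrary
choice). In the convention that matches the row-vector usage later in the
post, column j of F holds the Maclaurin coefficients of f(x)^j, so that
V(x) * F = V(f(x)) up to truncation; since f(0) = 1, the first column of
G = I - F is indeed identically zero.

import sympy as sp

x = sp.symbols('x')
f = sp.sqrt(1 + x**2)
N = 8                                # truncation size

# Truncated Carleman matrix: entry F[i, j] = coefficient of x^i in f(x)^j
F = sp.zeros(N, N)
for j in range(N):
    ser = sp.series(f**j, x, 0, N).removeO()
    for i in range(N):
        F[i, j] = ser.coeff(x, i)

G = sp.eye(N) - F
print(G[:, 0].T)                     # all zeros: G is singular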
> But again, trying this in Pari/GP leads to inconclusive results; the
> inversion process seems to run into divergent series again.
>
> Here we now have the possibility to LDU-factorize H into triangular
> factors, each of which can be inverted, so formally we have
>
> H = L * D * U
> M = H^-1 = U^-1 * (D^-1 * L^-1)
>
> We cannot perform that multiplication, due to still-strong divergences
> when evaluating the row-column dot products (which only makes explicit
> why the direct Pari/GP attempts at inverting H are hopeless).
>
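
The factorization identity itself can be checked on a toy example in exact
arithmetic. The 3x3 matrix below is made up purely for illustration (it is
not the Carleman-based H), but it shows the formal recombination
H^-1 = U^-1 * (D^-1 * L^-1) in Python/sympy:

import sympy as sp

H = sp.Matrix([[2, 1, 1],
               [4, 3, 3],
               [8, 7, 9]])

L, U, _ = H.LUdecomposition()        # no row swaps are needed for this H
D = sp.diag(*[U[i, i] for i in range(U.rows)])
Uhat = D.inv() * U                   # unit upper-triangular factor

assert L * D * Uhat == H                            # H = L * D * U
assert Uhat.inv() * (D.inv() * L.inv()) == H.inv()  # H^-1 = U^-1 * (D^-1 * L^-1)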
> But we can use the upper triangular U^-1 in the sense of a "change of basis"
> operation. Our goal is to have M such that we can write
>
> (V(sqrt(a)) - V(sqrt(b))) * M[,2] = s(a,b+1) = sqrt(a)+sqrt(a+1)+...+sqrt(b)
>
> where V(x) denotes the Vandermonde vector V(x) = [1, x, x^2, x^3, x^4, ...]
>
> But now we can proceed from the formal formula
>
> (V(sqrt(a)) - V(sqrt(b))) * M[,2]
> = (V(sqrt(a)) - V(sqrt(b))) * U^-1 * ( D^-1 * L^-1)[,2]
>
> and can compute
>
> X(sqrt(a),sqrt(b)) = (V(sqrt(a)) - V(sqrt(b))) * U^-1
>
> exactly to any truncation size because U^-1 is upper triangular and thus
> column-finite.
>
> Then we can likewise compute
>
> Q = ( D^-1 * L^-1)[,2]
>
> which is, apart from the truncation to finite size, an exact expression.
>
> We still have that the dot product
>
> s(a,b) = X(sqrt(a),sqrt(b)) * Q
>
> is divergent, but now it seems that we can apply Euler summation
> to evaluate the divergent dot product.
> The results are nice approximations for the first couple of
> sums s(1,4), s(2,4) and some more. s(1,9) requires Euler summation
> of somewhat higher order, and I get
>
> s(1,9) ~ 16.3060
> (where 16.3060005260 is the value, exact to the first 11 digits)
>
> or
> s(5,10) = sqrt(5) + sqrt(6) + ... + sqrt(10)
>         ~ 16.32199
> (where 16.3220138163 is the value, exact to the first 11 digits)
>
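
Euler summation itself can be illustrated independently of the matrices
above. A minimal Python sketch of the classical (order-1) Euler transform
for an alternating series; the example series 1 - 2 + 4 - 8 + ... is
generic and not the actual dot product from the post:

from fractions import Fraction

def euler_sum(a, terms):
    # Euler transform of sum_k (-1)^k a_k:
    #   sum_n (-1)^n (Delta^n a)(0) / 2^(n+1),  Delta = forward difference
    diffs = [Fraction(v) for v in a]
    total = Fraction(0)
    for n in range(terms):
        total += Fraction((-1) ** n, 2 ** (n + 1)) * diffs[0]
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

# 1 - 2 + 4 - 8 + ... has Euler sum 1/(1 + 2) = 1/3, although its ordinary
# partial sums diverge.
a = [2 ** k for k in range(40)]
print(float(euler_sum(a, 30)))       # 0.3333333...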
>
> I don't know how to do better at the moment; surely, if the composition
> of the entries in the matrices were better known to me, one could
> do better, possibly with analytical, or at least less divergent, expressions.
>
> (But I don't have enough time to go into this more deeply, I'm afraid.)


Hi Gottfried,

The exponent for the natural numbers that I'm most interested
in is a = -s, with s = 1/2 + i*t, t real in [0, oo), as in:

1^a + 2^a + ... + (N-1)^a
or
1 + 1/2^s + ... + 1/(N-1)^s, because in the Euler-Maclaurin
summation for zeta(s),
1/1^s + 1/2^s + ... + 1/(N-1)^s is computed, for some N > |t|.

The other term is a sum involving the Bernoulli numbers,
plus (1/2) N^(-s) + N^(1-s)/(s-1) [Edwards].

But a "shortcut" for evaluating
1/1^s + 1/2^s + ... + 1/(N-1)^s to high precision would
save time in the Euler-Maclaurin summation for zeta(s),
so this connects to algorithms for computing zeta(s) to
high precision, a much-studied topic.

It seems pretty hard too ...

David
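
For reference, the Euler-Maclaurin evaluation described above can be written
down in a few lines with mpmath. The cutoffs N and M and the test point
s = 1/2 + 25i are illustrative choices only, and mp.zeta serves as an
independent check:

import mpmath as mp

mp.mp.dps = 30

def zeta_em(s, N=30, M=10):
    # Euler-Maclaurin approximation of zeta(s), as in Edwards:
    #   sum_{n<N} n^-s + N^(1-s)/(s-1) + N^-s/2
    #   + sum_{k=1}^M B_{2k}/(2k)! * s(s+1)...(s+2k-2) * N^(-s-2k+1)
    s = mp.mpc(s)
    total = mp.fsum(mp.power(n, -s) for n in range(1, N))
    total += mp.power(N, 1 - s) / (s - 1) + mp.power(N, -s) / 2
    for k in range(1, M + 1):
        rising = mp.rf(s, 2 * k - 1)     # s (s+1) ... (s+2k-2)
        total += (mp.bernoulli(2 * k) / mp.factorial(2 * k)
                  * rising * mp.power(N, -s - 2 * k + 1))
    return total

s = mp.mpc('0.5', '25')      # a point on the critical line, with N > |t|
print(zeta_em(s))
print(mp.zeta(s))            # the two values should agree closely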





