


Topic: Bernoulli numbers and sqrt(1)+sqrt(2)+sqrt(3) + ... sqrt(1000)
Replies: 9   Last Post: Feb 18, 2013 2:34 PM

Gottfried Helms
Re: Bernoulli numbers and sqrt(1)+sqrt(2)+sqrt(3) + ... sqrt(1000)
Posted: Feb 18, 2013 7:01 AM

On 18.02.2013 09:38, David Bernier wrote:
> On 02/18/2013 02:26 AM, Gottfried Helms wrote:
>>
>>
>> On 16.02.2013 07:42, David Bernier wrote:

>>> The Bernoulli numbers can be used to compute for example
>>> 1^10 + 2^10 + ... + 1000^10 .
>>>
>>> Jakob Bernoulli wrote around 1700-1713 that he had computed
>>> the sum of the 10th powers of the integers 1 through 1000,
>>> with the result:
>>> 91409924241424243424241924242500
>>>
>>> in less than "one half of a quarter hour" ...
>>>
>>> Suppose we change the exponent from 10 to 1/2, so the sum
>>> is then:
>>> sqrt(1) + sqrt(2) + ... sqrt(1000).
>>>
>>> Or, more generally,
>>> sqrt(1) + sqrt(2) + ... sqrt(N) , N some largish positive
>>> integer.
>>>
>>> Can Bernoulli numbers or some generalization be used
>>> to compute that efficiently and accurately?
>>>

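As an aside: Bernoulli's value is easy to confirm today with exact integer arithmetic, e.g. in Python:

```python
# Confirm Jakob Bernoulli's sum 1^10 + 2^10 + ... + 1000^10
# using Python's arbitrary-precision integers.
total = sum(k**10 for k in range(1, 1001))
print(total)  # 91409924241424243424241924242500
```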
>> I've done an exploration of the integrals of the Bernoulli-
>> polynomials which I called for convenience Zeta-polynomials
>> and which I studied as a matrix of coefficients, which I
>> call "ZETA"-matrix [2]. Each row r gives the coefficients for
>> the sums of like powers with exponent r, so we get the
>> polynomials for r=0,1,2,3,... in one aggregate of numbers.
>> It is then natural to generalize the creation-rule for that
>> matrix to fractional row-indexes. However, this method
>> then no longer gives polynomials but series (which is not
>> what you want, sorry...). These series have the form
>>
>> S_r(a,b) = sum_{k=0}^{infty} zeta(-r+c) * binomial(r,c) * ((a+1)^r - b^r)
>>

>
>
> Are c and k related?
>
>

>> where the definition of the binomials is also generalized
>> to fractional r (the cases when -r+c = 1 must be handled by
>> replacing zeta(1)/gamma(0) by +1 or -1; I don't recall the required
>> sign at the moment). It then gives the sum of the r'th powers
>> of the bases a to b in steps of 1, and for natural
>> numbers r it reproduces the Bernoulli polynomials in Faulhaber's
>> original form.

>
> Maybe I should look up Faulhaber's formula.
>

>> If you are happy with approximations like in your examples,
>> none of this will be of much help/inspiration though, I'm afraid..
>>
>> Gottfried Helms
>>
>> [1] http://go.helms-net.de/math/binomial_new/
>> [2] http://go.helms-net.de/math/binomial_new/04_3_SummingOfLikePowers.pdf

> [...]
>
> The summatory polynomial for the k'th powers of 1, 2, ... n,
> P(x), has the property that P(x) - P(x-1) = x^k,
> at least for positive integers x.
> I assume k is a positive integer.
>
> So, does there exist a continuous f: [a, oo) -> R such that
> f(x) - f(x-1) = sqrt(x) for any x in [a, oo) ?
>
> Ramanujan wrote a paper on sum of consecutive square roots:
>
> http://oeis.org/A025224/internal
>
> david bernier
>

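Before going on: one standard route for the original question is the Euler-Maclaurin expansion, whose constant term is zeta(-1/2) ~= -0.2078862250. A small Python sketch (the correction terms come from the Bernoulli numbers B_2 and B_4, the zeta constant is precomputed, and the cutoff n = 1000 is just an illustration):

```python
import math

def sum_sqrt_direct(n):
    """Direct summation of sqrt(1) + sqrt(2) + ... + sqrt(n)."""
    return math.fsum(math.sqrt(k) for k in range(1, n + 1))

def sum_sqrt_euler_maclaurin(n):
    """Euler-Maclaurin approximation of sqrt(1) + ... + sqrt(n).

    The constant term is zeta(-1/2); the n^(-1/2) and n^(-5/2)
    corrections come from Bernoulli numbers B_2 = 1/6, B_4 = -1/30.
    """
    zeta_m_half = -0.20788622497735457   # zeta(-1/2), precomputed
    return (2.0 / 3.0) * n ** 1.5 \
         + 0.5 * math.sqrt(n) \
         + zeta_m_half \
         + 1.0 / (24.0 * math.sqrt(n)) \
         - 1.0 / (1920.0 * n ** 2.5)

n = 1000
print(sum_sqrt_direct(n))
print(sum_sqrt_euler_maclaurin(n))  # agrees to many digits
```

For growing n the truncation error falls off like n^(-9/2), so a handful of terms already gives near machine precision.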
Hmm, my usual tools give only strongly divergent series, for which my
procedures of divergent summation do not work well if I use, say,
only 64 or 128 terms of the power series.
But I've now converted the problem into one which employs the following rationale:

f(x) = sqrt(1+x^2)

such that f(1) = sqrt(2), f(f(1)) = sqrt(3), and so on, so that we
can write

S(a,b) = a + f(a) + f(f(a)) + ... + f(...f(a)...)
       = f°0(a) + f°1(a) + f°2(a) + ... + f°d(a)

where d=b-a and the circle at f°k(x) indicates the k'th iterate.
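In code, this rewriting looks as follows (a small Python illustration; the endpoints in the test call are my choice):

```python
import math

def f(x):
    """f(x) = sqrt(1 + x^2); iterating f steps sqrt(n) to sqrt(n+1)."""
    return math.sqrt(1.0 + x * x)

def S(a, b):
    """sqrt(a) + sqrt(a+1) + ... + sqrt(b) via iterates of f,
    starting from x = sqrt(a), i.e. f°k(sqrt(a)) = sqrt(a + k)."""
    x = math.sqrt(a)
    total = 0.0
    for _ in range(b - a + 1):
        total += x
        x = f(x)
    return total

print(S(1, 9))  # ~19.3060, equals sqrt(1) + ... + sqrt(9)
```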

After that the problem can be attacked with the concept of Carleman matrices
and their powers. Let F be the Carleman matrix for the function f(x),
and let G = I - F. Then, if we could invert G in the sense that

M = G^-1 = I + F + F^2 + F^3 + F^4 + ... (see Neumann-series, wikipedia)

we would have a solution in terms of a formal power series for

m(x) = x + f(x) + f°2(x) + f°3(x) + ....

and m(sqrt(a)) - m(sqrt(b)) would give the required sum of square roots from
sqrt(a) to sqrt(b) in steps of 1 of its argument.
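For concreteness, a small truncated-Carleman-matrix sketch in Python (the truncation size N and test point x = 0.5 are my choices; the closed form f(x)^i = (1+x^2)^(i/2) makes the coefficients easy to write down):

```python
def gbinom(a, k):
    """Generalized binomial coefficient binomial(a, k) for real a."""
    r = 1.0
    for m in range(k):
        r *= (a - m) / (m + 1)
    return r

def carleman(N):
    """Truncated N x N Carleman matrix F of f(x) = sqrt(1 + x^2):
    row i holds the power-series coefficients of f(x)^i, and
    f(x)^i = (1 + x^2)^(i/2) = sum_k binomial(i/2, k) x^(2k)."""
    F = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for k in range((N + 1) // 2):
            if 2 * k < N:
                F[i][2 * k] = gbinom(i / 2.0, k)
    return F

# F applied to the Vandermonde vector V(x) = [1, x, x^2, ...]
# reproduces V(f(x)); row 1 gives f(x) itself:
N, x = 40, 0.5
F = carleman(N)
V = [x ** j for j in range(N)]
row1 = sum(F[1][j] * V[j] for j in range(N))
print(row1)  # ~1.1180, i.e. f(0.5) = sqrt(1.25)
```

Inside the radius of convergence |x| < 1 of the series for f, the truncated rows converge quickly; at the boundary the truncation already behaves badly, which matches the divergence troubles described below.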

However, G has a zero in the top-left entry and cannot be inverted.
Now there is a proposal, which I've seen a couple of times, that we simply
invert the upper square submatrix after removing the first (empty) column
of G (let's call this H), and this often gives an at least approximate
solution, if not an exact one.

But again, trying this using Pari/GP leads to inconclusive results; the
inversion process seems to run into divergent series again.

Here we now have the possibility to LDU-factorize H into triangular
factors, each of which can be inverted, so we have formally
H = L * D * U
M = H^-1 = U^-1 * (D^-1 * L^-1)
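The triangular factorization itself is elementary; a Doolittle-style sketch in Python of what H = L*D*U means for a generic square matrix (the 3x3 example matrix is mine, not the actual H):

```python
def ldu(A):
    """LDU factorization without pivoting: A = L * D * U with
    L unit lower triangular, D diagonal, U unit upper triangular.
    Assumes all leading principal minors are nonzero."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    D = [0.0] * n
    M = [row[:] for row in A]            # working copy (Schur complements)
    for k in range(n):
        D[k] = M[k][k]
        for j in range(k + 1, n):
            U[k][j] = M[k][j] / D[k]
        for i in range(k + 1, n):
            L[i][k] = M[i][k] / D[k]
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] -= L[i][k] * D[k] * U[k][j]
    return L, D, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, D, U = ldu(A)
# Reassemble L * diag(D) * U and compare with A:
n = 3
LDU = [[sum(L[i][k] * D[k] * U[k][j] for k in range(n))
        for j in range(n)] for i in range(n)]
print(LDU)  # reproduces A
```

The point of the factorization is that the triangular factors can each be inverted row by row, even when inverting H directly fails numerically.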

We cannot perform that multiplication due to still-strong divergences
when evaluating the row-column dot-products (this only makes explicit
the helpless Pari/GP attempts at inverting H).
But we can use the upper triangular U^-1 in the sense of a "change of
base" operation. Our goal is to have M such that we can write

(V(a) - V(b)) * M[,2] = s(a,b+1) = sqrt(a)+sqrt(a+1)+...+sqrt(b)

where V(x) means a Vandermonde vector V(x) = [1,x,x^2,x^3,x^4,....]

But now, from the formal formula
(V(a) - V(b)) * M[,2] = (V(a) - V(b)) * U^-1 * ( D^-1 * L^-1)[,2]
we can compute

X(a,b) = (V(a) - V(b)) * U^-1
exactly to any truncation size, because U^-1 is upper triangular and thus
column-finite.
Then we can just as well form
Q = ( D^-1 * L^-1)[,2]
which is, besides the truncation to finite size, an exact expression.

We still have that the dot-product
s(a,b) = X(a,b) * Q
is divergent, but now it seems that we can apply Euler summation
to evaluate the divergent dot-product.
The results are nice approximations for the first couple of
sums s(1,4), s(2,4) and some more. s(1,9) requires Euler summation
of some higher order, with which I get

s(1,9) ~ 16.3060
(where 16.3060005260 is exact to the first 11 digits)
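For readers unfamiliar with it, the Euler transform used for such divergent alternating series can be sketched generically in Python (this toy uses the divergent series 1 - 2 + 4 - 8 + ..., not the actual dot-product above):

```python
def euler_sum(a, nterms):
    """Euler summation of the alternating series
        sum_{n>=0} (-1)^n a[n]
    via the Euler transform
        sum_{k>=0} (-1)^k (Delta^k a)(0) / 2^(k+1),
    where Delta is the forward difference a[n+1] - a[n]."""
    diffs = list(a[:nterms])
    total = 0.0
    sign = 1.0
    for k in range(nterms):
        total += sign * diffs[0] / 2.0 ** (k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        sign = -sign
    return total

# Divergent series 1 - 2 + 4 - 8 + ... ; its Abel/Euler value is 1/3:
a = [2.0 ** n for n in range(30)]
print(euler_sum(a, 25))  # ~0.3333
```

For this example the forward differences are Delta^k a(0) = (2-1)^k = 1, so the transformed series is a convergent geometric series; "higher order" Euler summation as mentioned above iterates the same averaging idea.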

I don't know how to do better at the moment; surely, if the composition
of the entries in the matrices were better known to me, one could
do better, possibly even with analytical, or at least less divergent, expressions.

(But I've not enough time to step into this deeper, I'm afraid)

Gottfried Helms
