
Topic: Quantum computers cannot compute trancendentals?
Replies: 25   Last Post: Jan 18, 2014 12:45 AM

 David Bernier Posts: 3,892 Registered: 12/13/04
Re: Quantum computers cannot compute trancendentals?
Posted: Jan 18, 2014 12:45 AM

On 01/01/2014 11:23 AM, David Bernier wrote:
> On 12/24/2013 04:24 AM, David Bernier wrote:
>> On 12/24/2013 02:16 AM, David Bernier wrote:
>>> On 12/20/2013 05:37 PM, quasi wrote:
>>>> Jimmy Weissmuller wrote:
>>>>> Clark Smith wrote:
>>>>>> 7 wrote:
>>>>>>>
>>>>>>> The quantum 'logic gates' I have cannot in any shape
>>>>>>> or manner compute a transcendental number directly.

>>>>
>>>> Can you represent the transcendental number e _exactly_
>>>> in finite time _without_ a computer?
>>>>
>>>> That depends on what you mean by "represent".

>>>
>>> [...]
>>>
>>> In algebra-type manipulations, we can replace an expression by
>>> an expression with the same value; the hard part is first deriving
>>> identities, (e.g. arc-tangent formulas for pi), but
>>> also the non-trivial question of which of numerous equal
>>> expressions to do a substitution with ...
>>>
>>> In more fine-tuned substitution questions, it might be good
>>> to target a specific precision, such as 100,000 places after the
>>> decimal point in computing the real and imaginary parts
>>> of zeta(0.5 + i*14.13), and if using the Euler-MacLaurin method,
>>> I've found I need to include say the term B_70000, the
>>> 70,000th Bernoulli number:
>>>
>>> < http://en.wikipedia.org/wiki/Bernoulli_number >
>>>
>>> These mysterious numbers are rational, with rather slow-growing
>>> denominators, and the even-indexed Bernoulli numbers grow at
>>> least exponentially in magnitude.
>>>
>>> For instance, | B_70000 | is of the order of 10^252887.
>>> Now if it multiplies a number of the order 10^(-350887),
>>> the product is of order 1/(10^98000).
>>>
>>> If we're targeting 100,000 decimals after the point, it seems to
>>> me that only the 2000-3000 most significant digits of
>>> | B_70000 | are needed ...
>>>
>>> So, if our new problem is to compute | B_70000 | closely
>>> enough so that the leading 3000 digits (the most significant)
>>> are all correct, it could be much easier than using e.g. Simon
>>> Plouffe's method (designed to compute the Bernoullis exactly).
>>>
>>> His method for B_6 = 1/42 would involve using
>>> 2*zeta(6)*(6*5*4*3*2*1)/((2*Pi)^6)
>>>
>>> or 2*zeta(6)*6!/((2pi)^6) [exact]
>>> with zeta(6) ~= 1 + 1/(2^6) + ... 1/(k^6) , small k,
>>> and additional constraints from theorems on the denominator or
>>> fractional part, etc. to pin down B_6 exactly as a rational.
>>>
>>> For |B_70000 |,
>>>
>>> (2*70000!/((2pi)^70000))*(1+1/2^70000)
>>> should deliver perhaps the 20000 most significant digits
>>> of |B_70000 |.
>>>
>>> So, for efficient computation to some pre-determined number of
>>> decimals with Euler-MacLaurin summation (a general method), the
>>> even-indexed Bernoulli numbers grow so fast, and the terms they
>>> yield can be so small, that "a fraction" of the precision involved
>>> in computing the exact rational value is sufficient.
>>>
>>> Anyway, I find "algebraic" substitutions in standard and special
>>> function computations quite fascinating.
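The magnitude estimates above are cheap to cross-check. Since zeta(2n) is essentially 1 for large n, the analytic formula gives log10 |B_2n| ~= log10(2) + log10((2n)!) - 2n*log10(2*pi), and the factorial's logarithm comes directly from math.lgamma. A short Python sketch of my own (not from the thread):

```python
import math

def log10_abs_bernoulli(two_n: int) -> float:
    """Approximate log10 |B_{2n}| from |B_{2n}| = 2*(2n)!*zeta(2n)/(2*pi)^(2n),
    taking zeta(2n) ~= 1 (exact to astronomical accuracy for large n)
    and computing log10((2n)!) via lgamma."""
    log10_fact = math.lgamma(two_n + 1) / math.log(10)
    return math.log10(2.0) + log10_fact - two_n * math.log10(2 * math.pi)

# |B_70000| is of the order of 10^252887, as claimed above:
print(log10_abs_bernoulli(70000))
```

The same one-liner reproduces the |B_120,000| ~= 10^461608 estimate quoted further down.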

>>
>> [...]
>>
>> I've experimented with B_120,000 , which is negative and of absolute
>> value of the order of 10^461608 , with the objective of obtaining
>> an approximation with ~= 200,000 significant digits.
>>
>> This is what I came up with to be issued to PARI/gp as a command:
>> (to get | B_120,000 | in fact ):
>>
>> ? D=exp(log(-bernfrac(120000)));
>>
>> The terminal semi-colon suppresses output.
>>
>> The timing was about 6.6 seconds.
>>
>> It does indeed seem to deliver the leading 200,000 digits, but
>> expresses them in scientific notation, which is fine.
>>
>> On the other hand, bernreal(.), which delivers a floating-point number,
>> isn't working; it needs more memory:
>>
>> ? bernreal(120000);
>> *** at top-level: bernreal(120000)
>> *** ^----------------
>> *** bernreal: not enough memory
>> *** Break loop: type 'break' to go back to GP
>> break>
>>
>> I haven't counted the claimed 200,000 displayed digits in | B_120,000 |,
>> but it ends as follows:
>>
>> ...................570339600007904973943801586581893 E376755
>> [ precision 200,000 significant digits ].
>>
>> Now I see that most of the time is spent computing the Bernoulli number
>> as a rational:
>>
>> ? bernfrac(120000);
>> time = 5,137 ms.
>>
>> But it's much improved if I use the formula that I believe Plouffe
>> used to compute Bernoulli numbers, together with PARI/gp's
>> built-in floating-point factorial:
>>
>> ? V=120000;C= factorial(V); C=2*C; C= C/((2*Pi)^V);C=C*zeta(V);
>> time = 293 ms.
>>
>> The formula used can be found as Equation (41) of this MathWorld page:
>>
>> < http://mathworld.wolfram.com/BernoulliNumber.html > .
>>
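The exact-rational method sketched above (compute B_2n from zeta(2n) via the analytic formula, then pin down the rational) can be illustrated at toy scale. This Python sketch is my own simplification: the denominator/fractional-part theorems used in the real method are replaced by rounding to a nearby small-denominator fraction.

```python
import math
from fractions import Fraction

def zeta_even(two_n: int, k_max: int = 50) -> float:
    """zeta(2n) from a short partial sum plus the two leading
    Euler-MacLaurin tail terms (integral term and half-term)."""
    s = sum(1.0 / k ** two_n for k in range(1, k_max))
    s += k_max ** (1 - two_n) / (two_n - 1)
    s += 0.5 * k_max ** (-two_n)
    return s

def bernoulli_from_zeta(n: int) -> Fraction:
    """B_{2n} from |B_{2n}| = 2*zeta(2n)*(2n)!/(2*pi)^(2n), sign (-1)^(n-1).
    The exact rational is recovered here by rounding to the closest
    small-denominator fraction, a stand-in for the theorems on the
    denominator that the actual method uses."""
    mag = 2.0 * zeta_even(2 * n) * math.factorial(2 * n) / (2 * math.pi) ** (2 * n)
    return Fraction((-1) ** (n - 1) * mag).limit_denominator(1000)

print(bernoulli_from_zeta(3))   # B_6 = 1/42
```

For B_6 this reproduces the 2*zeta(6)*6!/((2pi)^6) computation described above, then snaps the float to the exact rational 1/42.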

>
> It's indeed possible to save time in Bernoulli number computation by
> using MathWorld's Equation (41).
>
> This is for use in the Euler-MacLaurin formula for zeta(.).
>
> One part of Euler-MacLaurin for computing zeta(.) involves
> adding-up many terms in the partial sum:
>
> sum_{n = 1 ... N-1} 1/n^s, to compute zeta(s);
>
> the analytic formula for the even-index Bernoulli numbers doesn't help
> here.
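For concreteness, here is that partial-sum term at machine precision in Python (my sketch; the thread's N values are far larger and are run at 50,000+ digits in PARI/gp). It uses the same exp(-s*log(n)) form as the PARI/gp command later in the post:

```python
import cmath
import math

def partial_sum(s: complex, N: int) -> complex:
    """The Euler-MacLaurin partial-sum term sum_{n=1}^{N-1} n^(-s),
    written as exp(-s*log(n))."""
    return sum(cmath.exp(-s * cmath.log(n)) for n in range(1, N))

# sanity check at s = 2: the partial sum approaches zeta(2) = pi^2/6
g = partial_sum(2 + 0j, 2000)
print(abs(g - math.pi ** 2 / 6))
```

No analytic shortcut is known for this term, which is why its cost dominates the timings reported below.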
>
> I timed the case N = 16300, nu = 37000 with 50,000 decimals
> at s = first non-trivial zeta zero:
>
> http://www.dtc.umn.edu/~odlyzko/zeta_tables/zeros2
>
>
> so s = 1/2 + i*14.13472514173469...
>
> The time was 3h 22min for 42,570 decimals accuracy.
>
> cf.:
> < http://meditationatae.wordpress.com/2014/01/01/experimenting-with-euler-mclaurin-summation-for-zeta-function/ >
>
> Copied from there:
>
> "One term that requires a lot of time to
> compute is sum_{n = 1 ... N-1} n^(-s) ; the analytic formula for the
> Bernoulli numbers doesn't help with that 'partial sum' term."
>
> So, it's an on-going experiment to see what's possible.
> Also, a test of the PARI/gp calculator's "consistency",
> by trying many formulas ...

[...]

I've gone back to the quad-core Intel machine in the meantime. You may
notice the absence of a .sig in the 1 January post; that was due to
using the alternative AMD 5000+ PC ...

From earlier work, I already had a tentative value of rho_1, the first
nontrivial zero of zeta, with its imaginary part to 90,000 digits after
the point.

From Section 6.4 of Edwards' book on the Riemann Zeta Function,
we have in Euler-MacLaurin for zeta:

- the partial sum term, of N-1 summands

- two miscellaneous terms that count for little work

- the B_2, B_4, ... B_{2 nu} terms involving even-index
Bernoulli numbers.

Using the analytic formula (41) at MathWorld,
http://mathworld.wolfram.com/BernoulliNumber.html

we can compute Bernoulli numbers from values of zeta.

I'm closing in on zeta(rho_1) precise to 90,000 digits;

still, the partial sum term of N-1 summands takes a lot of time,
and more digits (say 150,000) will require a larger N.

So, the good news is that the Bernoulli terms are computed faster than
with exact rational evaluation through PARI/gp's bernfrac(.) function.

The bad news is that for the summands here:

- the partial sum term, of N-1 summands

the time is the same as before, for identical N.

? g = sum(X=1,N-1,exp(-s*log(X)));
time = 19h, 45min, 50,152 ms.

Almost 20 hours with N = 35000.

I kept N as is, and increased 'nu' or 'khyber' in PARI/gp:

khyber = 74733 (this is 'nu' )

? t=0; q=s/(2*N*exp(s*log(N))); q=q/(Pi*Pi); \\ setting-up 'q'

for(X=1,khyber,t=t+ (((-1)^(X-1))*zeta(2*X))*q; \\ Bernoulli number
q = q*((s+2*X-1)*(s+2*X)/(N*N)); q=q/(4*Pi*Pi) ); \\ loop (***)

t = t + exp((1-s)*log(N))/(s-1) + exp(-s*log(N))/2 ; \\ misc. terms
z2 = t+g; \\ adding partial sums 'g' .

time = 6h, 59min, 40,015 ms.

This results in 89,972 decimals of accuracy; the value is almost zero,
as expected at a zero of zeta.

B_2n = (-1)^(n-1) * 2 * ((2n)!) / ((2*Pi)^(2n)) * zeta(2n)

(MathWorld Bernoulli numbers, eqn. (41))
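At machine precision, the loop above can be transcribed to Python: the same running factor 'q' and the same (-1)^(X-1)*zeta(2X) substitution for the Bernoulli numbers, with zeta at small even arguments supplied from closed forms. This is my sketch, not the PARI/gp session itself, and it can be sanity-checked where zeta is known exactly:

```python
import cmath
import math

# zeta at small even integers (closed forms), feeding the Bernoulli terms
ZETA_EVEN = {2: math.pi**2 / 6, 4: math.pi**4 / 90, 6: math.pi**6 / 945,
             8: math.pi**8 / 9450, 10: math.pi**10 / 93555}

def zeta_em(s: complex, N: int = 10, nu: int = 5) -> complex:
    """Euler-MacLaurin for zeta(s): partial sum + two misc. terms +
    nu Bernoulli terms, mirroring the PARI/gp loop in the post."""
    g = sum(cmath.exp(-s * cmath.log(n)) for n in range(1, N))  # partial sum
    t = 0
    q = s / (2 * N * cmath.exp(s * cmath.log(N)) * math.pi**2)  # setting up 'q'
    for X in range(1, nu + 1):                                  # loop (***)
        t += (-1) ** (X - 1) * ZETA_EVEN[2 * X] * q
        q *= (s + 2 * X - 1) * (s + 2 * X) / (N * N) / (4 * math.pi**2)
    t += cmath.exp((1 - s) * cmath.log(N)) / (s - 1)            # misc. terms
    t += cmath.exp(-s * cmath.log(N)) / 2
    return t + g
```

Even with N = 10 and nu = 5, zeta_em(2) agrees with pi^2/6 to better than 1e-12, and at s = 1/2 + i*14.1347... the result is close to zero, as it should be.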

As indicated previously,

| B_{120,000} | ~= 10^461608

but for some purposes such as Euler-MacLaurin, we don't need 461,000
significant digits of B_{120,000}, including those at
`` \\ loop (***) '' above.

The summands there, past nu=1000, are very small in absolute value.
So, 90,050 significant digits in every multiplicand should be about
enough.

Also, Fredrik Johansson has computed rho_1 to 1 million bits,
and also computed the Stieltjes constants, Keiper-Li coefficients
to record precision, according to his arxiv pre-print:

http://arxiv.org/abs/1309.2877

It points to a repository of zeta zeros computed to 20,000-digit
precision (see inside).

David Bernier

--
