
Ask Dr. Math - Questions and Answers from our Archives
_____________________________________________

The Logarithm of a Complex Number

Date: 07/11/2007 at 21:49:15
From: Naresh
Subject: log(a+i*b)

I'm wondering how to calculate the log of a complex number.  More
specifically, I need to calculate acosh(x), where 'x' can be
any negative number.

Since I am not using the std C++ complex library, I calculated the
inverse hyperbolic cosine using

  acosh(x) = log(x + sqrt(x^2 - 1)), where x can be negative.

This is needed while calculating Chebyshev polynomials.



Date: 07/13/2007 at 12:22:55
From: Doctor Vogler
Subject: Re: log(a+i*b)

Hi Naresh,

Thanks for writing to Dr. Math.  I'd like to answer your question in 
at least three different ways.  So let's start with your actual 
question.

The log function, on complex numbers, is a multi-valued function. 
Just as all (nonzero) numbers have not one but two square roots
(because there are two numbers with the same square), so too every
(nonzero) complex number has infinitely many logarithms, because
there are infinitely many numbers that you can raise e to and get the
same result.  But they all differ by some integer multiple of 2*pi*i.
So a complex logarithm is really only defined up to adding some
integer multiple of 2*pi*i.  For example, log(-1) might be pi*i, and
it might be -pi*i.  One common convention is to always take the
imaginary part between -pi (exclusive) and +pi (inclusive), much
like square roots of positive real numbers are usually taken to be
positive, but any such convention has its problems.  For example,
logarithm rules like

  log(ab) = log(a) + log(b)

might be off by a multiple of 2*pi*i.  (E.g. try a = b = -1.)

That said, the logarithm of a complex number can be computed with real
functions as follows:

  log(a + bi) = (1/2)log(a^2 + b^2) + i*t

where t is an angle (in radians, thus defined up to adding some 
multiple of 2*pi), measured counterclockwise from the positive real
axis, that points at (a, b) from the origin (0, 0).  This can be
computed, for example, using the arctan function:

  t = arctan(b/a), when a > 0
  t = pi + arctan(b/a), when a < 0
  t = pi/2, when a = 0 and b > 0
  t = -pi/2, when a = 0 and b < 0

But this will actually get t in the range -pi/2 <= t < 3*pi/2.  If you
want to use the convention -pi < t <= pi, then you should use:

  t = arctan(b/a), when a > 0
  t = pi + arctan(b/a), when a < 0 and b >= 0
  t = -pi + arctan(b/a), when a < 0 and b < 0
  t = pi/2, when a = 0 and b > 0
  t = -pi/2, when a = 0 and b < 0

You can also define it in terms of arctan(a/b), but there will still
be four cases.
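The formulas above can be sketched in C++ using only real functions.
The name complex_log is my own choosing, not a library function; note
that the standard call std::atan2(b, a) implements exactly the
four-case angle convention -pi < t <= pi described above:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Sketch: log(a + bi) computed with real functions only, using the
// convention -pi < t <= pi for the imaginary part.  Returns the pair
// {real part, imaginary part}.  Undefined for a = b = 0, since zero
// has no logarithm.
std::pair<double, double> complex_log(double a, double b) {
    double re = 0.5 * std::log(a * a + b * b);  // (1/2)log(a^2 + b^2)
    double t  = std::atan2(b, a);               // angle t in (-pi, pi]
    return {re, t};
}
```

For example, complex_log(-1.0, 0.0) gives {0, pi}, matching the
convention log(-1) = pi*i from the discussion above.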


Next, it strikes me as odd that you are asking about logs in order to
use acosh (which I prefer to call arccosh) so you can compute values
of Chebyshev polynomials.  My guess is that you are using a formula
like the ones on

  Wikipedia: Chebyshev Polynomials
    http://en.wikipedia.org/wiki/Chebyshev_polynomials 

that defines T_n(x) explicitly in three different ways in the ranges

  x < -1
  -1 <= x <= 1
  x > 1.

Well, the only reason that there are three different ranges is so
that, when x is a real number, you DON'T have to use complex numbers
to compute T_n(x).  Note that when x is negative, you negate it (to
make it positive) before taking the arccosh.  So if you are starting
with a real value for x, then you don't need to use complex numbers at
all.

In fact, if you allow the use of complex numbers, then any one of the
three formulas will work for EVERY complex number, so you don't need
to implement all three.  For example, you can use Euler's equation

  e^(ix) = cos(x) + i sin(x)

to prove that

  cos(x) = (1/2)(e^(ix) + e^(-ix)),

which looks a lot like the formula for cosh(x).  In fact,

  cosh(ix) = cos(x)

from which you can conclude that

  arccosh(x) = i arccos(x).

Similarly,

  sinh(ix) = i sin(x)
  arcsinh(ix) = i arcsin(x),

and you can expand

  cos(a + bi) = cos(a) cosh(b) - i sin(a) sinh(b)

and so on.  Then these formulas allow you to conclude that

  cosh(n arccosh x) = cosh(ni arccos x)
                    = cos(n arccos x).

Also, since

  cos(pi - x) = -cos(x),

we get

  arccos(-x) = pi - arccos(x)

and therefore

  arccosh(-x) = i arccos(-x)
              = i(pi - arccos x)
              = i*pi - arccosh(x).

Then when you multiply by n and take a cosh, then when n is even the
extra term goes away, but when n is odd, it doesn't and instead
negates the cosh.  Thus the third formula for T_n(x).
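As a sketch, here is how the three real-only formulas fit together for
a real argument; chebyshev_t is a hypothetical name of my own, and the
(-1)^n factor in the x < -1 branch is exactly the sign that falls out
of the i*pi term derived above:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the three explicit formulas for T_n(x) on the real line.
// For x < -1 we negate x before taking acosh, and the i*pi term in
// arccosh(-x) = i*pi - arccosh(x) turns into a factor of (-1)^n.
double chebyshev_t(int n, double x) {
    if (x > 1.0)
        return std::cosh(n * std::acosh(x));
    if (x < -1.0)
        return ((n % 2 == 0) ? 1.0 : -1.0) * std::cosh(n * std::acosh(-x));
    return std::cos(n * std::acos(x));  // -1 <= x <= 1
}
```

A quick sanity check: T_3(x) = 4x^3 - 3x, so chebyshev_t(3, 2.0)
should give 26 and chebyshev_t(3, -2.0) should give -26.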


I should also point out that the "up to a multiple of 2*pi*i" ends up
not causing you any problems in this formula.  That's because when you
take the arccosh and introduce a multiple of 2*pi*i (or take the
arccos and introduce a multiple of 2*pi), you then multiply by the
integer n (and so you still have an integer multiple of 2*pi*i or
2*pi), and finally you take the cosh again (or cos), which doesn't
change when you add an integer multiple of 2*pi*i (or 2*pi), so there
is no ambiguity.  And this is a good thing, because a polynomial only
has one value at each location.


Finally, I would like to answer a different question that you might
have asked instead:  How should you compute values of Chebyshev
polynomials?

Well, if you are going to be computing T_n(x) for very small values of
n, then I think the recurrence relation would be the easiest way to do it:

  T_n(x) = 2x T_(n-1)(x) - T_(n-2)(x).
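A minimal sketch of that recurrence, starting from T_0(x) = 1 and
T_1(x) = x (the name chebyshev_t_rec is my own):

```cpp
#include <cassert>
#include <cmath>

// Sketch: evaluate T_n(x) with the recurrence
//   T_n(x) = 2x T_(n-1)(x) - T_(n-2)(x),
// starting from T_0(x) = 1 and T_1(x) = x.  Takes n multiplications,
// so it is best for small n.
double chebyshev_t_rec(int n, double x) {
    double prev = 1.0, cur = x;   // T_0 and T_1
    if (n == 0) return prev;
    for (int k = 2; k <= n; ++k) {
        double next = 2.0 * x * cur - prev;
        prev = cur;
        cur = next;
    }
    return cur;
}
```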

If n is very very large, then the explicit formulas might be best, but
they have their own problems, mainly that logs and square roots
iterated several times are difficult to compute.  So I would recommend
also considering (at least if speed of computation is important) an
alternative method, which will probably be best for mid-range values
of n (like several digits):  Giant-stepping.

Giant stepping is a name for a technique of evaluating the n'th term
of a sequence with only log(n) steps.  The sequence needs to have some
special structure, however, because you need to have a way (i.e. a
formula) for computing the (a+b)'th term when you know the a'th and
b'th terms.  Sometimes this formula also depends on a-b, but that is
not a problem for giant-stepping, because we'll only ever need to use
a-b=0 and a-b=1.  This is the idea:

First we have a method of getting from the k'th term to the 2k'th term
and the (2k+1)'st term.  To do this, we actually need to keep track of
TWO consecutive terms, and we'll work with the pair of them:

We start with the pair

  {f(k), f(k+1)},

and if we need to go from k to 2k, then we compute

  {f(2k), f(2k+1)},

and if we need to go from k to 2k+1, then we compute

  {f(2k+1), f(2k+2)},

but since

  2k = (k) + (k)
  2k+1 = (k) + (k+1)
  2k+2 = (k+1) + (k+1),

we can use our formula for the (a+b)'th term, since we already have
the a'th and b'th terms (namely the k'th and (k+1)'st terms).

Next, we use this in the following way:  Think of n (the term index
that we really want to compute) as being written in binary.  So the
leftmost digit is 1.  Let k=1 (the leftmost binary digit of n) and
start with

  {f(1), f(2)}.

Next, let k be the leftmost TWO binary digits of n.  Then that is
either 2k (if the new digit is 0) or 2k+1 (if the new digit is 1).  So
you use the method I described to jump to the new {f(k), f(k+1)}. 
Then you add one more binary digit to k, and repeat.  You keep doing
this until you get all of the binary digits of n, and then you have f(n).

This is how modular exponentiation is done, for example.  In that case,

  f(n) = b^n (mod m)

and the addition formula is

  f(a+b) = f(a)*f(b) (mod m).
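As a sketch, the modular-exponentiation case looks like the following.
Here the pair {f(k), f(k+1)} is not even needed, since f(2k) = f(k)^2
and f(2k+1) = f(k)^2 * b, so the scheme reduces to the familiar
left-to-right square-and-multiply.  The name pow_mod is my own, and
the code assumes m is small enough that (m-1)^2 fits in 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// Sketch: compute b^n mod m by scanning the binary digits of n from
// the leftmost.  At each digit we square (k -> 2k), and if the digit
// is 1 we also multiply by b (2k -> 2k+1), so only about log_2(n)
// steps are needed.  Assumes (m-1)^2 fits in a uint64_t.
uint64_t pow_mod(uint64_t b, uint64_t n, uint64_t m) {
    if (m == 1) return 0;
    uint64_t result = 1;
    b %= m;
    int hi = 63;                          // find the leading 1-bit of n
    while (hi >= 0 && !((n >> hi) & 1)) --hi;
    for (int i = hi; i >= 0; --i) {
        result = (result * result) % m;   // k -> 2k
        if ((n >> i) & 1)
            result = (result * b) % m;    // 2k -> 2k+1
    }
    return result;
}
```

For instance, pow_mod(2, 10, 1000) walks the binary digits 1010 of 10
and returns 1024 mod 1000 = 24.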

In your case, you have

  f(n) = T_n(x)

for some fixed (real or complex) number x.  I'm not sure if there is
an addition formula for this sequence, but I know that there is if you
also use the Chebyshev polynomials of the second kind, U_n(x).  Recall
that T_n(x) and U_n(x) are polynomials defined by

  T_n(cos t) = cos(nt)
  U_n(cos t)sin t = sin((n+1)t).

Using these formulas and some trigonometric identities, it is not hard
to prove that

  T_(2n)(x)   = 2*T_n(x)^2 - 1
  T_(2n+1)(x) = T_n(x)*T_(n+1)(x) - U_(n-1)(x)*U_n(x)*(1-x^2)
  T_(2n+2)(x) = 2*T_(n+1)(x)^2 - 1
  U_(2n-1)(x) = 2*U_(n-1)(x)*T_n(x)
  U_(2n)(x)   = U_(n-1)(x)*T_(n+1)(x) + T_n(x)*U_n(x)
  U_(2n+1)(x) = 2*U_n(x)*T_(n+1)(x)

so you can giant-step the four-tuple (quadruple)

  {T_n(x), T_(n+1)(x), U_(n-1)(x), U_n(x)},

which means that using only log(n)/log(2) steps, with each step
consisting of only 8 multiplications and 6 additions/subtractions, you
can compute T_n(x).  Compare that to how many multiplications and
additions it takes to compute cos(n arccos x) or cosh(n arccosh x) to
the same accuracy.  Of course, when n is very very large, then the
explicit formula will still be faster, but I think not when n is of a
reasonable size.
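Putting the six identities together, a sketch of the giant-stepping
loop might look like this, carrying the quadruple
{T_k(x), T_(k+1)(x), U_(k-1)(x), U_k(x)} across the binary digits of
n (the name chebyshev_t_fast is my own choosing):

```cpp
#include <cassert>
#include <cmath>

// Sketch: evaluate T_n(x) in about log_2(n) steps by giant-stepping
// the quadruple {T_k, T_(k+1), U_(k-1), U_k}.  Start with k = 1 (the
// leftmost binary digit of n); each remaining digit takes k to 2k or
// to 2k+1 using the doubling identities above.  Assumes n >= 1 is a
// 32-bit index (n = 0 is handled separately since T_0 = 1).
double chebyshev_t_fast(unsigned n, double x) {
    if (n == 0) return 1.0;
    double Tk = x, Tk1 = 2.0 * x * x - 1.0;   // T_1, T_2
    double Ukm1 = 1.0, Uk = 2.0 * x;          // U_0, U_1
    double s = 1.0 - x * x;
    int hi = 31;                              // leading 1-bit of n
    while (!((n >> hi) & 1)) --hi;
    for (int i = hi - 1; i >= 0; --i) {
        double T2k   = 2.0 * Tk * Tk - 1.0;
        double T2k1  = Tk * Tk1 - Ukm1 * Uk * s;
        double T2k2  = 2.0 * Tk1 * Tk1 - 1.0;
        double U2km1 = 2.0 * Ukm1 * Tk;
        double U2k   = Ukm1 * Tk1 + Tk * Uk;
        double U2k1  = 2.0 * Uk * Tk1;
        if ((n >> i) & 1) {    // k -> 2k+1
            Tk = T2k1; Tk1 = T2k2; Ukm1 = U2k;   Uk = U2k1;
        } else {               // k -> 2k
            Tk = T2k;  Tk1 = T2k1; Ukm1 = U2km1; Uk = U2k;
        }
    }
    return Tk;
}
```

Each loop iteration is the "8 multiplications and 6
additions/subtractions" step counted above (plus the one-time cost of
s = 1 - x^2), and the result can be checked against
cos(n arccos x) for -1 <= x <= 1.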

If you have any questions about this or need more help, please write
back and show me what you have been able to do, and I will try to
offer further suggestions.

- Doctor Vogler, The Math Forum
  http://mathforum.org/dr.math/ 
Associated Topics:
College Analysis
College Imaginary/Complex Numbers


Ask Dr. Math™
© 1994-2013 The Math Forum
http://mathforum.org/dr.math/