Deriving Trig Functions; Taylor Series
Date: 05/01/2001 at 23:30:23
From: Harold Brochmann
Subject: Deriving trig functions

Suppose I want to find the value of a trig function *from first principles* - no tables, no calculator - the sine of 32 degrees, for example. How would I go about it? If I use a formula, how is *it* derived? I have asked many math teachers; they do not know. I have never seen it in a textbook. Where should I look?
Date: 05/07/2001 at 15:43:31
From: Doctor Douglas
Subject: Re: Deriving trig functions

Hi Harold,

I think you are asking how one computes quantities such as sin(32 degrees) without using a calculator. Is that correct?

A straightforward way to do this is to use the power series representation of sin(x):

  sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ...

where the exclamation points mean factorial, e.g. 5! = 5*4*3*2*1 = 120, and x is a number in radians, not degrees. The conversion from degrees to radians is simple:

  x = (number of degrees) * pi / 180

So if you plug 32 degrees into this formula, and you know pi, you obtain x = 0.55850536 radians. Then, plugging this value of x into the series for sin(x), we can compute it to any desired accuracy, provided we take enough terms and we know x to the requisite number of digits.

There is a corresponding series for cos(x):

  cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ...

and one for tan(x):

  tan(x) = x + x^3/3 + 2x^5/15 + 17x^7/315 + ...

or, if you like, you can obtain it from tan(x) = sin(x)/cos(x).

Real calculators could use this type of method, but it is more common to implement special routines that start from data in a "table" stored in electronic memory and interpolate between points when the x-value isn't exactly equal to an entry already in the table, or that evaluate the trigonometric functions by exploiting their close relationship with complex numbers ("CORDIC" algorithms). Both of these methods can get intricate, especially in a demanding application that tries to optimize speed, accuracy, or both.

I hope this helps answer your question. It's not exactly a complete "first-principles" calculation, since that would involve how we actually define the sine and cosine functions, but it does show how one could compute values of these functions using enough pencil and paper and only the mathematical operations of addition/subtraction and multiplication/division.
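The series method described above can be sketched in a few lines of Python (a minimal illustration; the function name `sin_series` and the choice of 10 terms are my own, not part of the original exchange):

```python
import math

def sin_series(x, terms=10):
    """Approximate sin(x), with x in radians, by summing the power
    series x - x^3/3! + x^5/5! - x^7/7! + ... up to `terms` terms."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

# Convert 32 degrees to radians: x = 32 * pi / 180 = 0.55850536...
x = 32 * math.pi / 180
print(sin_series(x))   # close to math.sin(x)
```

With 10 terms the result agrees with the library `math.sin` to well beyond the precision of a hand calculation; in practice a handful of terms already suffices for this x.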
To explain the connection between the trigonometric functions and these series representations usually requires differential calculus. If you are curious about this, write back and we'll take it from there.

- Doctor Douglas, The Math Forum
  http://mathforum.org/dr.math/
Date: 05/07/2001 at 21:14:33
From: Harold Brochmann
Subject: Re: Deriving trig functions

I would appreciate that.

Harold Brochmann
Saltspring Island, BC, Canada
Date: 05/09/2001 at 06:56:33
From: Doctor Douglas
Subject: Re: Deriving trig functions

Hi again, Harold, and thanks for writing back.

The connection between the trigonometric functions and their series representations comes from the Taylor expansion of these functions. In what follows, let's measure x in radians.

Consider f(x) = cos(x), and suppose we are trying to compute this function near a = 0. We know that cos(a=0) = 1, but what about a number such as cos(x=0.1)? It's probably a bit less than 1, but how close is it to 1? The answer lies in approximating the function f(x) by its Taylor expansion: a power series in factors of (x-a), where we can easily obtain the coefficients of (x-a)^k. If we can compute these coefficients, then we can (with a calculator, even if it doesn't have a "cos" button) compute f(x) to any accuracy we wish.

Let the approximating series be p(x). We require:

  p(a)   = f(a)     if x = a, then p(a) had better be f(a)
  p'(a)  = f'(a)    if x is near a, make the slope of p(x) equal
                    the slope of f(x) there
  p''(a) = f''(a)   if x is near a, make the second derivatives of
                    p(x) and f(x) equal at x = a

If you extend this argument to higher and higher order derivatives, you can see that the following power series

  p(x) = f(a) + f'(a)(x-a)^1 / 1! + f''(a)(x-a)^2 / 2!
       + f'''(a)(x-a)^3 / 3! + ...

has the correct derivatives - i.e., p(a) = f(a), p'(a) = f'(a), and so on. This series is called the Taylor series for f(x) at x = a.

In the case of f(x) = cos(x), we have f'(x) = -sin(x), f''(x) = -cos(x), f'''(x) = sin(x), and so on. For x = a = 0, all of the odd derivatives vanish, and we are left with

  f(0)      =  1
  f'(0)     =  0
  f''(0)    = -1
  f'''(0)   =  0
  f''''(0)  =  1
  f^(5)(0)  =  0
  f^(6)(0)  = -1
  f^(7)(0)  =  0
  f^(8)(0)  =  1

and so on. We see how easy it is to compute these coefficients at x = a = 0, because of the nice properties of sine and cosine. Plugging these values back into the expression for p(x) yields

  p(x) = 1 + 0 + (-1)(x-0)^2/2! + 0 + (1)(x-0)^4/4!
       + 0 + (-1)(x-0)^6/6! + ...
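The coefficient pattern above is easy to turn into code. Here is a minimal Python sketch (the helper names `cos_taylor_coeff` and `p` are invented for illustration) that builds the Taylor polynomial for cos(x) about a = 0 from the cycling derivatives:

```python
import math

# The derivatives of cos cycle with period 4: cos, -sin, -cos, sin.
# Evaluated at a = 0 they give the pattern 1, 0, -1, 0, 1, 0, -1, 0, ...
def cos_taylor_coeff(k):
    """k-th Taylor coefficient f^(k)(0)/k! of cos(x) about a = 0."""
    deriv_at_zero = [1, 0, -1, 0][k % 4]
    return deriv_at_zero / math.factorial(k)

def p(x, n):
    """Degree-n Taylor polynomial of cos(x) about a = 0."""
    return sum(cos_taylor_coeff(k) * x ** k for k in range(n + 1))

print(p(0.1, 8))   # very close to math.cos(0.1)
```

Note that the code never calls a cosine routine: the coefficients come entirely from the derivative cycle, which is the point of the derivation above.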
       = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! - ...

as the Taylor series for f(x) = cos(x). The derivation of p(x) for sin(x) at x = 0 is similar. Of course at x = 0 the function sin(x) and its even derivatives vanish, so the Taylor series for sin(x) contains only terms with odd powers of x.

The question of whether f(x) = p(x) for all x is a slightly delicate issue. There are ways of estimating the error R[n](x) = f(x) - p[n](x) that is made by truncating the series p(x) to p[n](x) after n+1 terms. As long as this error can be made to go to zero (say in an interval around x = a), we say that p(x) = f(x) in this interval, and that p[n](x) approximates f(x) with an error no bigger than R[n](x) there.

You can see that, subject to this issue of making the error R[n](x) small, we can make numerical computations of f(x) near x = a using p[n](x). It's instructive to illustrate this for the first few n, say for x = 0.1 and f(x) = cos(x):

  f(0.1) = cos(0.1) = 0.995004165      (from a calculator with "cos")

  p(0.1) = 1                           well, that was easy

  p(0.1) = 1 - (0.1)^2/2! = 0.995      pretty close already!

  p(0.1) = 1 - (0.1)^2/2! + (0.1)^4/4!
         = 0.995 + 0.0001/24
         = 0.995004167                 good to eight digits already!

This shows how we can compute f(x) to fairly high accuracy without using a large number of terms, and without anything more complicated than raising a number to a power and the elementary operations of addition/subtraction and multiplication/division.

Remark: In some cases a single value of a suffices for the entire domain of x. This is true for f(x) = cos(x) above: the series represents cos(x) for any x (even x not close to a = 0), as long as you carry out the sum to a sufficient number of terms. Again, the issue is how large the error is after that number of terms. It turns out that for cos(x) the error can be made to go to zero as n increases to infinity, no matter what x is.

Remark: You can choose different values for a.
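The worked example above can be reproduced numerically. This small Python sketch prints each partial sum alongside its error against the library value of cos(0.1):

```python
import math

x = 0.1
exact = math.cos(x)   # 0.995004165... (the "calculator" value)

partial = 0.0
for k, term in enumerate([1.0,
                          -x**2 / math.factorial(2),
                          x**4 / math.factorial(4),
                          -x**6 / math.factorial(6)]):
    partial += term
    print(f"{k + 1} term(s): p = {partial:.9f}, "
          f"error = {abs(partial - exact):.2e}")
```

The error shrinks by several orders of magnitude with each added term, matching the "good to eight digits already" observation after just three terms.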
If you are looking for sin(pi/2 + 0.001), then you should choose a = pi/2 and compute all of the derivatives there, rather than at a = 0. This will make the sequence of polynomials p[n](x) converge to f(x) much more rapidly than if you had expanded at a = 0.

- Doctor Douglas, The Math Forum
  http://mathforum.org/dr.math/
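This last remark is easy to check numerically. A short Python sketch (the helper `sin_about_zero` is my own) compares a two-term expansion of sin about a = pi/2 with a two-term expansion about a = 0, for x = pi/2 + 0.001:

```python
import math

x = math.pi / 2 + 0.001
h = x - math.pi / 2   # distance from the new center a = pi/2

# About a = pi/2 the derivatives of sin cycle 1, 0, -1, 0, ... so
# sin(pi/2 + h) = 1 - h^2/2! + h^4/4! - ...
two_terms_at_pi_half = 1 - h**2 / 2
print(abs(two_terms_at_pi_half - math.sin(x)))   # tiny error

# About a = 0 the same two-term budget does far worse, since x ~ 1.57
# is not small:
def sin_about_zero(x, n_terms):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

print(abs(sin_about_zero(x, 2) - math.sin(x)))   # much larger error
```

With the same number of terms, centering the expansion at pi/2 gives an error around fourteen orders of magnitude smaller, which is the convergence advantage the remark describes.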
Date: 05/09/2001 at 11:31:26
From: Harold Brochmann
Subject: Re: Deriving trig functions

> The connection between the trigonometric functions and their series
> representation comes from the Taylor expansion of these functions.
..... etc.

Thank you for taking the trouble. It's going to take me a while to digest the setting up of the Taylor series... but this is at least a place to start. You're doing fine work. :-)
Date: 05/13/2001 at 09:17:34
From: Harold Brochmann
Subject: Re: Deriving trig functions

I appreciate your response. Telling me about the Taylor expansion led me to find:

  Taylor Series - Why We Want Them and How We Find Them
  James A. Sellers, Dept. of Science and Mathematics, Cedarville College
  http://www.krellinst.org/UCES/archive/resources/taylor_series/

which contains a terse yet more detailed explanation. You might wish to refer others with similar questions there.

Thanks again.

Harold Brochmann
Saltspring Island, BC, Canada