
Topic: Please explain Gram-Schmidt Process
Replies: 1   Last Post: Dec 2, 1996 12:46 AM

Ronald Bruck

Posts: 172
Registered: 12/6/04
Re: Please explain Gram-Schmidt Process
Posted: Dec 2, 1996 12:46 AM

In article <32A0DCB6.3B23@pe.net>, Gordon Talge <gtalge@pe.net> wrote:

> Robert Gelb wrote:
> >
> > Could someone please explain the Gram-Schmidt Process with regard to the
> > orthogonalization process (inner product spaces)?


...

> As a Mathematics graduate from CSULB, I seem to remember that the Math
> Department teaches a class on Approximation Theory.
>
> 1) Look up who teaches it and get them to explain it to you.
> 2) OR check with Dr. Warner or Dr. Maryfield in the Math Dept.
> 3) OR check out from the library:
> Introduction to Numerical Analysis by F.B. Hildebrand (QA297.H54
> 1974) and read Chap 7 Least-Squares Polynomial Approximation and
> note 7.13 Gram Approximation. There are other books, but this should
> get you started.


Perhaps it would be more helpful to explain the result geometrically. The
idea is this: given a finite-dimensional linear subspace L of an
inner-product space and a point w not in the subspace, there is a unique
closest point Pw in L to w, which satisfies w - Pw \perp L. In particular,
if the linear subspace has an orthonormal basis u_1, ..., u_n, then Pw must
be of the form \sum a_i u_i, hence

<w - \sum a_i u_i, u_j> = 0 for all j, (since w - Pw \perp u_j)

i.e. <w,u_j> - a_j = 0, using the orthonormality of the u_i. Thus Pw is
computable as \sum <w,u_i> u_i.
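
(A concrete sketch, not from the original post: here is that projection in
NumPy, with the orthonormal basis stored as the rows of a matrix U; the
function and variable names are my own.)

    import numpy as np

    def project(w, U):
        # Rows of U are the orthonormal basis u_1, ..., u_n.
        # The coefficients a_i = <w, u_i> are just U @ w, and
        # Pw = sum_i a_i u_i is then U.T @ (U @ w).
        return U.T @ (U @ w)

    # Example: project a point of R^3 onto the xy-plane.
    U = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    w = np.array([3.0, 4.0, 5.0])
    Pw = project(w, U)              # [3. 4. 0.]
    print(np.dot(w - Pw, U[0]))     # ~0: w - Pw is perpendicular to L
    print(np.dot(w - Pw, U[1]))     # ~0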

The Gram-Schmidt process proceeds as follows: given vectors v_1, v_2, ...,
v_n (presumably linearly independent; if they are in fact dependent, the
algorithm catches you out by producing a zero vector at some step), construct
orthonormal vectors u_1, u_2, ..., u_n such that

span { u_1, ..., u_k } = span { v_1, ..., v_k } for 1 <= k <= n.

This is easy to do for k = 1; just take u_1 = v_1/||v_1||. Once you've got
it done for one value of k < n, to get it for k+1, consider the projection
Pv_{k+1} of v_{k+1} on the span of {u_1, ..., u_k} (computed as above), and
take u_{k+1} = (v_{k+1} - Pv_{k+1}) divided by its norm.
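
(In code the recursion is only a few lines; the following NumPy sketch is my
own illustration of exactly the steps above, not part of the original post.)

    import numpy as np

    def gram_schmidt(V, tol=1e-12):
        # Classical Gram-Schmidt on the rows v_1, ..., v_n of V.
        # Returns orthonormal rows u_1, ..., u_n with
        # span{u_1,...,u_k} = span{v_1,...,v_k} for each k.
        us = []
        for v in V:
            # Pv = sum_i <v, u_i> u_i, the projection onto the span so far.
            Pv = sum((np.dot(v, u) * u for u in us), start=np.zeros_like(v))
            r = v - Pv
            norm = np.linalg.norm(r)
            if norm < tol:
                raise ValueError("vectors are (numerically) linearly dependent")
            us.append(r / norm)
        return np.array(us)

    V = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    U = gram_schmidt(V)
    print(np.round(U @ U.T, 10))   # identity matrix: the rows are orthonormal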

This is the "unstable" direction of computation, incidentally: in floating
point the computed u's gradually lose orthogonality, and the "modified"
Gram-Schmidt variant, which subtracts one component at a time from the
current residual, behaves much better in practice.
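
(For comparison, a sketch of the modified variant; again my own illustration
under the same assumptions, not something from the original post.)

    import numpy as np

    def modified_gram_schmidt(V, tol=1e-12):
        # Same result as classical Gram-Schmidt in exact arithmetic, but
        # each component is subtracted from the *current* residuals, which
        # keeps the computed rows much closer to orthonormal in floating point.
        U = np.array(V, dtype=float)
        n = len(U)
        for k in range(n):
            norm = np.linalg.norm(U[k])
            if norm < tol:
                raise ValueError("vectors are (numerically) linearly dependent")
            U[k] /= norm
            # Immediately remove the u_k component from every later vector.
            for j in range(k + 1, n):
                U[j] -= np.dot(U[j], U[k]) * U[k]
        return U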

--Ron Bruck





