Ask Dr. Math - Questions and Answers from our Archives

Two Methods for Finding an Optimum Point in 3D Space

Date: 10/24/2005 at 08:40:46
From: Baris
Subject: Finding the optimum point of not-intersecting lines.

Dear Dr. Math,

I have more than two lines in 3-space.  They won't intersect because
of measurement errors, but they _should_ intersect.  How can I find
the optimum intersection point so that the error is minimized?

I want to express the solution in Ax = 0 form (A and x are matrices).
And I think I will be able to solve it by using SVD (Singular Value
Decomposition).

Let a line be expressed as: ax+by+cz+1 = 0  thus (a,b,c,1).  Then a
point (x',y',z',1) exists in space that is the optimum:

  [a1 b1 c1 1]  [x']
  [a2 b2 c2 1]  [y']  = 0
  [a3 b3 c3 1]  [z']
    ....        [1 ]
  [an bn cn 1]

Am I right?  Thank you.



Date: 10/24/2005 at 11:35:35
From: Doctor George
Subject: Re: Finding the optimum point of not-intersecting lines.

Hi Baris,

Thanks for writing to Doctor Math.

You are on the right track, except that ax+by+cz+1 = 0 is the equation
of a plane.  Since you are expecting to use SVD, it appears that you
are looking for a point that is optimal in the least squares sense.

Here is how I would approach this.  Consider the line that contains
(x0,y0,z0) with direction vector (a,b,c).  The squared distance from
(x,y,z) to any point on the line as a function of parameter 't' is

  D^2(t) = (x-x0-at)^2 + (y-y0-bt)^2 + (z-z0-ct)^2

Now take the first derivative and set it equal to 0.

  -2a(x-x0-at) - 2b(y-y0-bt) - 2c(z-z0-ct) = 0

  t = [a(x-x0) + b(y-y0) + c(z-z0)] / (a^2 + b^2 + c^2)

For simplicity let's assume that a^2 + b^2 + c^2 = 1 so that

  t = a(x-x0) + b(y-y0) + c(z-z0)

This value of t gives us the point on the line at which the distance
(or squared distance) is minimized, which is just the projection of
the point onto the line.
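
As a quick illustration, here is a minimal sketch in Python with NumPy
(the names are just for this example) of that projection:

  import numpy as np

  def project_point_onto_line(p, p0, d):
      """Project point p onto the line through p0 with direction d."""
      d = d / np.linalg.norm(d)       # enforce a^2 + b^2 + c^2 = 1
      t = np.dot(d, p - p0)           # t = a(x-x0) + b(y-y0) + c(z-z0)
      return p0 + t * d               # closest point on the line

  # Example: projecting (1, 2, 3) onto the x-axis gives (1, 0, 0)
  print(project_point_onto_line(np.array([1., 2., 3.]),
                                np.array([0., 0., 0.]),
                                np.array([1., 0., 0.])))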

Now each line gives us three equations, one for each component.

  x - xi - ai * ti = 0
  y - yi - bi * ti = 0
  z - zi - ci * ti = 0

If we substitute for each ti value and do a little rearranging we can
come up with a matrix equation of the form Ax = b, where vector x
contains the components of a point.  Now we can apply SVD to get the
optimal solution.
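
For concreteness, here is one way that system could be assembled and
solved in Python with NumPy (a sketch only; the function name is mine,
and np.linalg.lstsq performs an SVD-based least squares solve):

  import numpy as np

  def best_point_3n(points, directions):
      """Point minimizing the sum of squared distances to all the lines."""
      A_rows, b_rows = [], []
      for p, d in zip(points, directions):
          d = d / np.linalg.norm(d)
          M = np.eye(3) - np.outer(d, d)   # the three equations with ti substituted
          A_rows.append(M)                 # three rows per line
          b_rows.append(M @ p)
      A = np.vstack(A_rows)                # 3n x 3
      b = np.hstack(b_rows)                # 3n
      x, *_ = np.linalg.lstsq(A, b, rcond=None)
      return x

  # Two lines that nearly meet around (1, 1, 0.005)
  pts  = [np.array([0., 1., 0.]), np.array([1., 0., 0.01])]
  dirs = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]
  print(best_point_3n(pts, dirs))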

Does that make sense?  Write again if you need more help.

- Doctor George, The Math Forum
  http://mathforum.org/dr.math/ 



Date: 10/26/2005 at 15:02:46
From: Baris
Subject: Finding the optimum point of not-intersecting lines.

Dear Dr. Math,

Thank you very much for your excellent answer.  I thought you might be
interested to know that I found a better way to do this for
computational purposes.

I will use Plücker matrices to represent a line.  A line is
represented by a 4x4 skew-symmetric homogeneous matrix.  The line
joining two points A, B is represented by the matrix:

  L = ABt - BAt           (t means transpose)

The dual Plücker representation L* is obtained from two planes P and Q
whose intersection is the line:

  L* = PQt - QPt

It can be obtained simply by rewriting the entries of L:

  l12 : l13 : l14 : l23 : l42 : l34

    = l*34 : l*42 : l*23 : l*14 : l*13 : l*12

where the lij are the elements of L and the l*ij are the elements of L*.
After rewriting, be sure that L* is skew-symmetric.

Now it is time for our problem.  L* X = 0 if and only if X is on L.
We can construct Li* for all the lines, stack them, and perform SVD.

It may sound a bit complicated but believe me it is very easy.
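
In code, the construction might look something like this rough sketch
in Python with NumPy (the helper names are illustrative; each line is
assumed to be given by two points A and B in homogeneous coordinates):

  import numpy as np

  def plucker(A, B):
      """L = A Bt - B At for homogeneous points A and B (4-vectors)."""
      return np.outer(A, B) - np.outer(B, A)

  def dual_plucker(L):
      """Rewrite L into L* using the rule above."""
      l12, l13, l14 = L[0, 1], L[0, 2], L[0, 3]
      l23, l24, l34 = L[1, 2], L[1, 3], L[2, 3]
      U = np.zeros((4, 4))
      U[0, 1], U[0, 2], U[0, 3] = l34, -l24, l23    # l*12, l*13, l*14
      U[1, 2], U[1, 3], U[2, 3] = l14, -l13, l12    # l*23, l*24, l*34
      return U - U.T                                # skew-symmetric, as required

  def optimum_point(point_pairs):
      """Stack all L* and take the SVD vector with the smallest singular value."""
      A = np.vstack([dual_plucker(plucker(P, Q)) for P, Q in point_pairs])
      X = np.linalg.svd(A)[2][-1]       # minimizes |A X| over unit vectors X
      return X[:3] / X[3]               # dehomogenize (assumes a finite point)

  # Two nearly intersecting lines, each given by two homogeneous points
  lines = [(np.array([0., 1., 0., 1.]), np.array([2., 1., 0., 1.])),
           (np.array([1., 0., 0.01, 1.]), np.array([1., 2., 0.01, 1.]))]
  print(optimum_point(lines))           # close to (1, 1, 0.005)

One caveat: this minimizes an algebraic error, which is not in general
identical to the sum of squared distances described above, though for
small measurement errors the two answers typically agree closely.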



Date: 10/27/2005 at 12:04:17
From: Doctor George
Subject: Re: Finding the optimum point of not-intersecting lines.

Hi Baris,

Thanks for the additional input.  I am not very familiar with Plücker
coordinates, but I follow what is happening.  Our conversation may
become part of the Dr. Math archives, so for the benefit of others I
will explain a little further.

My original analysis is correct, but as you found, not efficient.  The
computations can be simplified.  Here is why.  Substituting for ti in
the three equations for line i gives this system of equations.

  [ ai^2-1  ai.bi    ai.ci ] [x]   [ ai^2-1  ai.bi    ai.ci ] [xi]
  [  ai.bi  bi^2-1   bi.ci ] [y] = [  ai.bi  bi^2-1   bi.ci ] [yi]
  [  ai.ci  bi.ci   ci^2-1 ] [z]   [  ai.ci  bi.ci   ci^2-1 ] [zi]

Clearly the point (xi,yi,zi) solves the system, but so should any
other point on the line.  However, the system is the intersection of
three planes, so this is suspicious.  To have any point on the line
solve the system, it must be that the planes are not linearly
independent.  A check of the determinant shows that it equals 0.  All
three rows of the matrix are perpendicular to (ai,bi,ci) because each of
the three planes contains the line of interest.  The SVD will still
find the optimal solution for the system with 3n rows, but it would be
better to have fewer equations in the system.
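
A quick numeric check in Python with NumPy (a throwaway sketch) shows
the dependence: for a unit direction d = (ai,bi,ci) the coefficient
matrix above has determinant 0, and d spans its null space.

  import numpy as np

  d = np.array([2., 1., 2.]) / 3.      # a unit direction (ai, bi, ci)
  M = np.outer(d, d) - np.eye(3)       # the coefficient matrix above
  print(np.linalg.det(M))              # ~0: the three rows are dependent
  print(M @ d)                         # ~[0 0 0]: each row is perpendicular to d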

The way to reduce the system is to replace the three planes with two
perpendicular planes whose intersection is the line of interest.  By
the Pythagorean Theorem, the sum of squared distances from any point
to two perpendicular planes is equal to the square of the distance
from the point to the intersection line.  Let's call the two
perpendicular planes

  a'x + b'y + c'z + d' = 0

  a''x + b''y + c''z + d'' = 0

where a'^2 + b'^2 + c'^2 = 1 and a''^2 + b''^2 + c''^2 = 1.

The squared distance from a point to the first plane is just (a'x +
b'y + c'z + d')^2, so our optimal point is the least squares solution
to the following system.

  [a1'  b1'  c1' ] [x]    [-d1' ]
  [a1'' b1'' c1''] [y]    [-d1'']
  [a2'  b2'  c2' ] [z]    [-d2' ]
  [a2'' b2'' c2'']    =   [-d2'']
  [ .    .    .  ]        [  .  ]
  [ .    .    .  ]        [  .  ]
  [ .    .    .  ]        [  .  ]
  [an'  bn'  cn' ]        [-dn' ]
  [an'' bn'' cn'']        [-dn'']

It looks like the Plücker coordinates provide a convenient technique
for constructing perpendicular planes.  Any technique will do, but you
seem to be doing something like the following.

If we know points P and Q on a line with direction vector L = (a,b,c),
then PxQ and (PxQ)xL are vectors that can be normalized to produce
(a',b',c') and (a'',b'',c'').  We then find each d' from a'xi + b'yi +
c'zi + d' = 0, and d'' likewise.
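
Putting the pieces together, the method might look something like this
in Python with NumPy (a sketch only; the helper names are mine, and the
PxQ construction assumes no line passes through the origin, where that
cross product would vanish):

  import numpy as np

  def two_perpendicular_planes(P, Q):
      """Two unit-normal, mutually perpendicular planes meeting in line PQ."""
      L  = Q - P                        # direction vector of the line
      n1 = np.cross(P, Q)
      n1 = n1 / np.linalg.norm(n1)      # (a', b', c')
      n2 = np.cross(n1, L)
      n2 = n2 / np.linalg.norm(n2)      # (a'', b'', c'')
      d1 = -np.dot(n1, P)               # from a'x + b'y + c'z + d' = 0
      d2 = -np.dot(n2, P)
      return (n1, d1), (n2, d2)

  def best_point_2n(point_pairs):
      """Least-squares point from the 2n x 3 system, two rows per line."""
      A, b = [], []
      for P, Q in point_pairs:
          for n, d in two_perpendicular_planes(P, Q):
              A.append(n)
              b.append(-d)
      x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
      return x

  lines = [(np.array([0., 1., 0.]), np.array([2., 1., 0.])),
           (np.array([1., 0., 0.01]), np.array([1., 2., 0.01]))]
  print(best_point_2n(lines))           # close to (1, 1, 0.005)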

Thanks for an interesting problem!

- Doctor George, The Math Forum
  http://mathforum.org/dr.math/ 