luca
Posts:
11
Registered:
5/19/10


Re: Sparse linear system
Posted:
May 20, 2010 8:15 PM


On 20 May, 23:47, Chip Eastham <hardm...@gmail.com> wrote:
> On May 20, 4:15 pm, Fred Krogh <fkr...@mathalacarte.com> wrote:
>> On 05/19/2010 08:31 PM, Chip Eastham wrote:
>>> On May 19, 7:08 pm, Fred Krogh <fkr...@mathalacarte.com> wrote:
>>>> On 05/19/2010 03:07 AM, luca wrote:
>>>>> Hi,
>>>>> suppose I have a matrix M x N of reals. This matrix is sparse. Every
>>>>> row has only 3 nonzero values (they are always 1, 2, 1). The M and
>>>>> N are on the order of 600-800.
>>>>>
>>>>> Is there a fast library (way) to solve a sparse linear system A X = B,
>>>>> where the matrix A is of the type defined below?
>>>>>
>>>>> I need a library written in C/C++ that is possibly portable.
>>>>>
>>>>> Thank you,
>>>>> Luca
>>>>
>>>> It may be that you want to reduce the matrix to upper Hessenberg form
>>>> using orthogonal transformations from the left and the right. Then when
>>>> changing the value of the diagonal you only need to solve a matrix
>>>> system with a single nonzero diagonal below the main diagonal.
>>>> Fred
>>>
>>> Orthogonal transformations will create "fill" for sparse matrices, in
>>> the sense that the transforming columns will have more nonzero entries
>>> than the original matrix A. The LU factorization will also cause "fill".
>>>
>>> Although there are strategies for minimizing fill, we should ask if
>>> a > 0 is sufficiently large that A + aI is positive definite (in
>>> addition to being real symmetric). If so, the use of conjugate gradient
>>> methods to solve the system (A + aI)x = b should be considered. The
>>> computation involves steps that take one matrix-vector product and a
>>> few vector-vector products, so with a sparse matrix each step can be
>>> done without fill.
>>>
>>> [Conjugate gradient method - Wikipedia]
>>> http://en.wikipedia.org/wiki/Conjugate_gradient_method
>>>
>>> If done in exact arithmetic, the CG method converges within N steps,
>>> so theoretically it is a direct method. However, usually the method is
>>> considered an indirect (iterative) method for approximating the
>>> solution of a large sparse linear system to whatever degree of
>>> accuracy is sought.
>>>
>>> regards, chip
>>
>> Of course there is more fill-in with the orthogonal transformations as
>> compared with using LU. One has to decide whether that extra fill-in is
>> worth the saving that comes when doing later solves. (And also whether
>> an iterative method is to be preferred over a direct method.)
>> Fred
>
> You're right. The matrix sizes 600-800 are not so immense that
> orthogonal factorization cannot be used, and it milks the maximum
> precision out of the solution.
>
> On the other hand, the further details OP luca provided about the
> coefficient matrix A (see above in thread) suggest even unpreconditioned
> conjugate gradient may converge satisfactorily.
>
> regards, chip
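To make the CG suggestion above concrete, here is a minimal unpreconditioned conjugate gradient sketch in C++ for (A + aI)x = b. It assumes the [1 2 1] rows form a symmetric tridiagonal matrix (diagonal 2 + a, off-diagonals 1); that is an assumption for illustration, since the exact sparsity pattern of the actual A was not spelled out in the thread. The names `apply`, `dot`, and `cg` are illustrative, not from any particular library.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// y = (A + aI) x for the ASSUMED symmetric tridiagonal stencil [1 2 1]:
// diagonal entries 2 + a, sub- and super-diagonal entries 1.
std::vector<double> apply(const std::vector<double>& x, double a) {
    const std::size_t n = x.size();
    std::vector<double> y(n);
    for (std::size_t i = 0; i < n; ++i) {
        y[i] = (2.0 + a) * x[i];
        if (i > 0)     y[i] += x[i - 1];
        if (i + 1 < n) y[i] += x[i + 1];
    }
    return y;
}

double dot(const std::vector<double>& u, const std::vector<double>& v) {
    double s = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i) s += u[i] * v[i];
    return s;
}

// Unpreconditioned CG for (A + aI) x = b; each step costs one
// matrix-vector product and a few vector-vector products (no fill).
std::vector<double> cg(const std::vector<double>& b, double a,
                       double tol = 1e-10, int maxit = 10000) {
    std::vector<double> x(b.size(), 0.0);
    std::vector<double> r = b;   // residual b - (A + aI) x, with x = 0
    std::vector<double> p = r;   // search direction
    double rr = dot(r, r);
    for (int k = 0; k < maxit && std::sqrt(rr) > tol; ++k) {
        std::vector<double> Ap = apply(p, a);
        double alpha = rr / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;
        rr = rr_new;
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
    }
    return x;
}
```

The tridiag(1, 2, 1) stencil has eigenvalues in (0, 4), so with a > 0 the shifted matrix is safely positive definite and CG applies.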
I am searching for the fastest way to solve this optimization problem (it is for a real-time application). From the discussion above, I am not yet sure whether, in the semi-implicit method, the coefficient <a> is constant or has to be updated at every iteration (and how to update it). I have three options:

1] use one <a> whatever the initial conditions are (in an offline phase, I compute the LU of A+aI);
2] given the initial problem, I compute a "good" <a> and the LU of A+aI; then, at every iteration, I solve a linear system of equations and check for convergence;
3] I set <a> to some initial value and, at every iteration, I compute the LU of A+aI, solve the linear system, check for convergence, recompute <a>, and so on...
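For what it is worth, options 1] and 2] amount to "factor once, solve many times". Under the same tridiagonal assumption as above (diagonal 2 + a, off-diagonals 1), the LU factorization is the O(n) Thomas algorithm, computed once offline, and each later iteration is only an O(n) forward/back substitution. This is a sketch under that assumption; `TriLU`, `factor`, and `solve` are illustrative names, not a real library API. No pivoting is needed here because the matrix is diagonally dominant for a > 0.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Precomputed tridiagonal LU of (A + aI), assumed stencil [1 2 1].
struct TriLU {
    std::vector<double> diag;  // diagonal of U
    std::vector<double> low;   // subdiagonal multipliers of L
};

// Offline phase: factor once for a fixed shift a (Thomas algorithm).
TriLU factor(std::size_t n, double a) {
    TriLU f{std::vector<double>(n), std::vector<double>(n, 0.0)};
    f.diag[0] = 2.0 + a;
    for (std::size_t i = 1; i < n; ++i) {
        f.low[i]  = 1.0 / f.diag[i - 1];   // eliminate subdiagonal 1
        f.diag[i] = (2.0 + a) - f.low[i];  // update pivot
    }
    return f;
}

// Per-iteration phase: O(n) forward elimination + back substitution.
std::vector<double> solve(const TriLU& f, std::vector<double> b) {
    const std::size_t n = b.size();
    for (std::size_t i = 1; i < n; ++i)
        b[i] -= f.low[i] * b[i - 1];       // forward: L y = b
    std::vector<double> x(n);
    x[n - 1] = b[n - 1] / f.diag[n - 1];
    for (std::size_t i = n - 1; i-- > 0; )
        x[i] = (b[i] - x[i + 1]) / f.diag[i];  // backward: U x = y
    return x;
}
```

Option 3] would call `factor` again every time <a> changes, which is still only O(n) per refactorization for this banded structure, so even that may be cheap enough for real-time use.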
Since the authors of the paper I am reading suggest using this semi-implicit method, I was thinking that it was the fastest way to obtain a solution. I'll check some basic texts on the solution of PDEs with iterative methods.
Thank you again, Luca

