I'm implementing a statistical algorithm that solves a maximization problem iteratively.
After convergence, the procedure produces a "negative Hessian" matrix, which I'll call -H.
If the algorithm has converged at a local maximum, the eigenvalues of -H should all be positive, i.e. -H should be positive-definite. The inverse of -H can then be used as the parameter covariance matrix.
My problem is this: the algorithm converges, and -H (which is roughly 35x35) has ALMOST all positive eigenvalues. They range from about +1e4 down to +1e-5, but there's ONE that's about -1e-9. Inverting -H then yields a couple of negative parameter variances on the diagonal, which are uninterpretable.
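For concreteness, here's a minimal NumPy sketch of the symptom; the matrix below is a synthetic stand-in with a spectrum like mine, not my real -H:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for -H: symmetric, with one tiny negative eigenvalue
Q, _ = np.linalg.qr(rng.standard_normal((35, 35)))
eigs = np.logspace(-5, 4, 35)   # +1e-5 ... +1e4, roughly my spectrum
eigs[0] = -1e-9                 # the one rogue eigenvalue
neg_H = (Q * eigs) @ Q.T        # Q diag(eigs) Q^T

print(np.linalg.eigvalsh(neg_H).min())  # ~ -1e-9, as in my real case
cov = np.linalg.inv(neg_H)
print(np.diag(cov).min())               # negative "variances" appear
```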
My guess is that I do in fact have a maximum, and that rounding error is responsible for the tiny negative eigenvalue. I've tried fiddling with some precision parameters, but I can't get rid of it.
What I THINK I'd like to do now is find a perturbation of -H that is as small as possible, yet large enough to make the matrix positive-definite, so that all my variances come out positive. Does anyone know how to do this? Does the suggestion even make sense?
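To make the question concrete, the naive thing I have in mind is something like the eigenvalue-clipping sketch below (the function name and the floor of 1e-8 are my own guesses, not an established recipe):

```python
import numpy as np

def clip_to_positive_definite(A, floor=1e-8):
    """Symmetrize A and floor its eigenvalues at `floor`.

    This is the naive perturbation I have in mind; I don't know
    whether it's statistically justified, or how to choose `floor`.
    """
    A_sym = (A + A.T) / 2                    # enforce exact symmetry
    eigvals, eigvecs = np.linalg.eigh(A_sym)
    clipped = np.maximum(eigvals, floor)
    return (eigvecs * clipped) @ eigvecs.T   # V diag(clipped) V^T

# e.g. on the synthetic neg_H above:
#   cov = np.linalg.inv(clip_to_positive_definite(neg_H))
#   np.diag(cov) is then all positive
```

In Frobenius norm the change to -H would be on the order of the floor plus the clipped eigenvalue, i.e. roughly 1e-8, which seems negligible next to the +1e4 end of the spectrum; but whether the floor should be absolute or scaled to the largest eigenvalue is exactly the kind of thing I'm unsure about.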