Views expressed in these public forums are not endorsed by
NCTM or The Math Forum.



deconvolution
Posted: Mar 5, 2013 5:04 PM


Hi All, I'm writing a paper and, not being a mathematician, am having a hard time describing the concept I want to convey. I hope some experts can help with the terminology and/or articles that address this topic!
I'm estimating an impulse response function from a biological system. The way most people in my field estimate the impulse response of this system is to deliver a brief input and measure the mean response. The input is fast, about 500 ms, and the output is slow, about 10-20 sec. They typically wait a long time between measurements.
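To make the standard approach concrete, here is a minimal simulation sketch (all details - the kernel shape, noise level, sampling rate, and trial count - are my own hypothetical choices, not the real experiment):

```python
import numpy as np

# Sketch of the standard approach: deliver a brief pulse, record the slow
# response, and average over trials. Kernel shape, noise level, and all
# numbers are hypothetical illustrations, not the actual biological system.
rng = np.random.default_rng(0)
dt = 0.1                                    # sample period, s (assumed)
t = np.arange(0, 20, dt)                    # ~20 s response window
h_true = t * np.exp(-t / 2.0)               # hypothetical slow impulse response

pulse = np.zeros_like(t)
pulse[: int(0.5 / dt)] = 1.0                # ~500 ms input pulse

trials = [np.convolve(pulse, h_true)[: t.size]
          + 0.2 * rng.standard_normal(t.size)   # measurement noise
          for _ in range(30)]               # long wait between trials assumed
mean_response = np.mean(trials, axis=0)

# The mean response equals h_true convolved with the pulse, i.e. the
# estimate is smeared by the input's own duration.
```

Note the last comment: averaging recovers the response to the pulse, not the impulse response itself, which is the smearing problem described next.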
The problem is that this is not an optimal way of estimating the impulse response: the estimate is smeared by the duration of the input, and it gets noisier with time. One solution is to use an input with a flat frequency profile (e.g. an M-sequence). For various reasons, I can't do that. So instead I used a set of inputs with variable durations (uniformly distributed). Then I estimate the impulse response by convolving a set of basis functions with my inputs and solving for the convolution kernel across all the durations simultaneously.
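In case a concrete sketch of that kind of estimate helps: model the kernel as a weighted sum of basis functions, convolve each basis function with each trial's input, and solve one least-squares problem over all durations at once. This is my own illustration - the basis set (Gaussian bumps), the durations, and the noise level are all assumptions, not the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                                     # sample period, s (assumed)
n_k = 100                                    # kernel support: 10 s
tk = np.arange(n_k) * dt

# Basis set: smooth Gaussian bumps spanning the kernel window (one common
# choice; the basis used in the real analysis may differ)
centers = np.linspace(0, tk[-1], 8)
width = centers[1] - centers[0]
B = np.exp(-0.5 * ((tk[:, None] - centers) / width) ** 2)   # (n_k, 8)

h_true = tk * np.exp(-tk / 2.0)              # hypothetical true kernel
n_t = 200                                    # 20 s per trial

X_rows, y_rows = [], []
for d in rng.uniform(0.5, 3.0, size=20):     # variable pulse durations, s
    x = np.zeros(n_t)
    x[: int(d / dt)] = 1.0                   # boxcar input of duration d
    y = np.convolve(x, h_true)[:n_t] + 0.05 * rng.standard_normal(n_t)
    # Regressors: each basis function convolved with this trial's input
    X = np.column_stack([np.convolve(x, B[:, j])[:n_t] for j in range(8)])
    X_rows.append(X)
    y_rows.append(y)

# One least-squares solve across all durations simultaneously
w, *_ = np.linalg.lstsq(np.vstack(X_rows), np.concatenate(y_rows), rcond=None)
h_hat = B @ w                                # estimated impulse response
```

Because every duration enters the same linear system, the solution is constrained by all of them at once rather than by any single pulse width.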
OK, here is my question: I want to say that a constant-duration input produces a noisier estimate of the impulse response than a variable-duration input, especially on longer timescales. Maybe something like... as the frequency content of the input narrows toward a single frequency, the noise in the estimate of the impulse response at all the other frequencies grows dramatically. I'm sure I'm not phrasing this very elegantly (or correctly!)... alas, I'm a biologist.
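One way to phrase this (and to check it numerically) is in the frequency domain: deconvolution effectively divides the response spectrum by the input spectrum, so the noise in the kernel estimate at frequency f scales roughly as N(f)/|X(f)|. A single fixed-duration boxcar has deep spectral nulls (the zeros of its sinc spectrum), so the noise there is amplified enormously; pooling variable durations keeps |X(f)| away from zero across the band. A sketch with made-up numbers:

```python
import numpy as np

# Illustration (my own, all numbers made up): compare the spectral floor of
# one fixed-duration boxcar input against the pooled (RMS) spectrum of many
# variable-duration boxcars. Deconvolution noise ~ N(f)/|X(f)| explodes
# wherever |X(f)| dips toward zero.
n, dt = 1024, 0.1
f = np.fft.rfftfreq(n, dt)

def boxcar_amp(duration):
    x = np.zeros(n)
    x[: int(duration / dt)] = 1.0
    return np.abs(np.fft.rfft(x))

rng = np.random.default_rng(1)
fixed = boxcar_amp(2.0)                      # one duration: deep sinc nulls
pooled = np.sqrt(np.mean(
    [boxcar_amp(d) ** 2 for d in rng.uniform(0.5, 3.0, 50)], axis=0))

band = (f > 0) & (f < 2.0)                   # 0-2 Hz, DC excluded
fixed_floor = fixed[band].min()              # near zero at the nulls
pooled_floor = pooled[band].min()            # stays well above zero
print(fixed_floor, pooled_floor)
```

The pooled spectral floor is much higher than the fixed-duration one, which is exactly the "noisier estimate at the frequencies the input doesn't contain" effect in question.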
Anyway, what's the best way to say this, and are there any references I can cite? Many thanks!
z



