I am working on a stock selection system and have determined that I should use a neural network. The process I am thinking of is:
1. Determine processed inputs/indicators; let's call this the m x 1 vector X (one vector per time period per stock).
2. Feed the processed inputs to a neural network g(X), whose output is a continuous rank score from 0 to 1.
3. Run g(X) for all the stocks in my universe for a given time period. For the n (say n = 25) stocks with the best rank, I calculate the equal-weight (i.e. average) return of those stocks.

Essentially the function to maximize is f(R, Y, n), where n is the number of stocks, R is the vector of returns of each stock in the universe, Y is the output of the NN, and f returns the average return of the top-ranked n stocks. More generally, one could say that f is a function of the NN, X, R and n, i.e. f(g(X), X, R, n).
Hopefully that was clear.
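To make the objective concrete, here is a small sketch (in Python rather than MATLAB, purely for illustration) of what I mean by f. The data and the linear "network" standing in for g are toy placeholders, not my real setup:

```python
import numpy as np

def f(scores, returns, n=25):
    """Average return of the n stocks with the highest rank scores."""
    top = np.argsort(scores)[-n:]          # indices of the n best-ranked stocks
    return returns[top].mean()

def g(X, w):
    """Toy stand-in for the NN: sigmoid of a linear score, so ranks lie in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-X @ w))

# toy universe: 100 stocks, m = 5 indicators each, one time period
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # one feature vector per stock
R = rng.normal(size=100)                   # realized returns for the period
w = rng.normal(size=5)                     # stand-in for the NN's weights

portfolio_return = f(g(X, w), R, n=25)
```

The quantity I want to maximize (over the NN's parameters) is the average of `portfolio_return` across many time periods.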
So the problem is: how do I come up with a training process for g(X) that maximizes the expected value of f, on average over time? I thought I could use some kind of genetic algorithm in MATLAB to train the neural net, but when I look up the use of genetic algorithms to train neural nets, it sounds like they are used to fit the NN to target outputs, not to maximize (or minimize) an objective computed from its outputs.
It seems important in this problem to maximize rather than "fit", because that is what the problem actually involves. If I were to pick the 25 best-returning stocks each period and train the NN to fit that selection as closely as possible, I am concerned that I would not actually end up with the maximized result. (If anybody thinks I am wrong on this, please let me know.)
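In case it helps clarify what I am after, this is the kind of black-box training loop I have in mind: treat the NN weights as the genome and the average of f over all periods as the fitness to maximize, with no target outputs anywhere. The sketch below uses a bare-bones (1+1) evolution strategy on toy random data instead of a full GA, just to show the shape of the loop:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(X, w):
    """Toy one-layer 'network': sigmoid score per stock."""
    return 1.0 / (1.0 + np.exp(-X @ w))

def f(scores, returns, n=25):
    """Average return of the top-n ranked stocks."""
    return returns[np.argsort(scores)[-n:]].mean()

# toy training data: T periods of 100 stocks with m = 5 indicators
T, n_stocks, m = 50, 100, 5
Xs = rng.normal(size=(T, n_stocks, m))
Rs = rng.normal(size=(T, n_stocks))

def objective(w):
    """Average f across all periods -- the quantity to MAXIMIZE."""
    return np.mean([f(g(Xs[t], w), Rs[t]) for t in range(T)])

# (1+1) evolution strategy: mutate the weights, keep the better candidate
w = rng.normal(size=m)
best = objective(w)
for _ in range(200):
    cand = w + 0.1 * rng.normal(size=m)
    val = objective(cand)
    if val > best:
        w, best = cand, val
```

My understanding is that in MATLAB the equivalent would be handing `ga` (Global Optimization Toolbox) a fitness function that unpacks the weight vector into the network, evaluates the average f over the training periods, and returns its negative (since `ga` minimizes) -- but I would welcome confirmation that this is the right way to use it.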
Any help, or pointers to which MATLAB functions to use for this, would be great.