"Bruno Luong" <firstname.lastname@example.org> wrote in message <email@example.com>...
> I have not fully tested, but it should go like this:
>
> % lower + upper bounds
> lo = [1 0 2 1 0];
> up = [5 6 4 4 2];
> % target sum
> s = 15;
> % number of samples
> n = 1000;
>
> m = length(lo);
> slo = sum(lo);
> sup = sum(up);
>
> x0 = interp1([slo; sup], [lo; up], s);
> if any(isnan(x0))
>     error('no solution exists')
> end
>
> x0 = x0(:);
> B = null(ones(size(lo)));
> A = [-B; B];
> b = [x0-lo(:); up(:)-x0];
>
> % FEX: http://www.mathworks.com/matlabcentral/fileexchange/30892
> % Thank you Matt
> tol = 1e-8; % tolerance for vertex enumeration
> [V,nr,nre] = lcon2vert(A,b,[],[],tol);
>
> T = delaunayn(V);
> P = V(T,:);
> P = reshape(P, [size(T) m-1]);
> P = permute(P, [3 2 1]);
> np = size(P,3);
>
> % http://www.mathworks.com/matlabcentral/fileexchange/9700
> % Thank you Roger
> Q = bsxfun(@minus, P, P(:,end,:));
> Q(:,end,:) = [];
> vol = arrayfun(@(k) det(Q(:,:,k)), 1:np);
> vol = abs(vol);
> w = randfixedsum(m,n,1,0,1);
>
> % generate n samples of v with probability vol
> vol = vol / sum(vol);
> c = cumsum(vol); c(end) = 1;
> [~, i] = histc(rand(1,n), [0 c]);
>
> V = P(:,:,i);
> y = zeros(m-1, n);
> for k = 1:n
>     y(:,k) = V(:,:,k)*w(:,k);
> end
> x = bsxfun(@plus, x0, B*y);
>
> % Bruno
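For readers without the FEX packages at hand, the two building blocks Bruno combines after the Delaunay step -- uniform barycentric weights (the role randfixedsum(m,n,1,0,1) plays here, column by column) and volume-weighted choice of a simplex (the cumsum/histc lines) -- can be sketched in plain Python. The function names below are mine, not Bruno's, and this is an illustration of the two sub-steps only, not of the full polytope construction:

```python
import math
import random

def uniform_barycentric(m, rng=random):
    """Weights uniform on the standard simplex: normalized i.i.d. Exp(1)
    draws (equivalent to a Dirichlet(1,...,1) sample)."""
    e = [-math.log(1.0 - rng.random()) for _ in range(m)]
    s = sum(e)
    return [x / s for x in e]

def sample_in_simplices(simplices, vols, n, rng=random):
    """Pick a simplex with probability proportional to its volume, then
    return a uniform point inside it as a convex combination of its
    vertices, mirroring Bruno's cumsum/histc selection and V(:,:,k)*w."""
    total = sum(vols)
    cum, acc = [], 0.0
    for v in vols:
        acc += v
        cum.append(acc / total)
    cum[-1] = 1.0  # guard against round-off, like c(end)=1 above
    out = []
    for _ in range(n):
        u = rng.random()
        k = next(i for i, c in enumerate(cum) if u <= c)
        verts = simplices[k]  # list of m vertices, each a coordinate list
        w = uniform_barycentric(len(verts), rng)
        pt = [sum(wi * v[d] for wi, v in zip(w, verts))
              for d in range(len(verts[0]))]
        out.append(pt)
    return out
```

The Exp(1)-normalization trick is one standard way to get weights uniform on a simplex; randfixedsum is more general, since it also handles bounds other than [0,1].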
I would be interested to know how this approach compares with straightforward rejection sampling, that is, without cutting the region up into simplices. If the dimension is n, the relevant subspace is defined by 2*n inequalities and one equality, so in dimension 51 there are only 102 inequalities. I was thinking of the following procedure:
1. Let f be the change-of-scale map from Dmitrey's box B to the unit cube U. f sends Dmitrey's probability measure to the standard probability measure.
2. Let V be the subspace of B defined by one linear equality and 2n inequalities. Then f(V) in U is also defined by one linear equality and 2n inequalities.
3. Use Gram-Schmidt to construct a linear isometry g that sends U into an n-dimensional Euclidean space Z and f(V) into the subspace z_1 = 0. The input data for Gram-Schmidt has as its first vector a unit vector v orthogonal to f(V), followed by the standard basis vectors, which are orthogonal to the various (n-1)-dimensional faces of U. (Perhaps omit the basis vector nearest to v.)
4. Using max and min, find the smallest rectangle R in Z containing g(U), and hence a rectangle S, obtained from R by setting z_1 = 0. S contains g(f(V)).
5. Generate a uniform random sample in S and keep only those points satisfying the appropriate 2*n inequalities.
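A much-simplified variant of this rejection idea can skip the Gram-Schmidt change of coordinates entirely: draw the first n-1 coordinates uniformly in their intervals, force the last one from the sum constraint, and accept only if it lands in its own interval. Since the hyperplane is the graph of a linear map over the first n-1 coordinates, accepted points are still uniform on the slice. A pure-Python sketch (function name is mine, for illustration only; it shares the same acceptance-ratio concern as the procedure above):

```python
import random

def rejection_sample_fixed_sum(lo, up, s, n, max_tries=1_000_000, rng=random):
    """Draw n points uniform on the slice sum(x) = s of the box [lo, up].

    Rejection scheme: the first m-1 coordinates are uniform in their
    intervals, the last is determined by the sum constraint, and the
    draw is kept only if that last coordinate lies within its bounds.
    """
    out = []
    for _ in range(max_tries):
        if len(out) == n:
            break
        x = [rng.uniform(a, b) for a, b in zip(lo[:-1], up[:-1])]
        last = s - sum(x)
        if lo[-1] <= last <= up[-1]:
            out.append(x + [last])
    return out
```

With Bruno's data (lo = [1,0,2,1,0], up = [5,6,4,4,2], s = 15) the acceptance rate is tolerable, but in dimension 51 it could well be tiny, which is exactly the worry raised below.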
I suppose that the problem with this method is that the volume of g(f(V)) may be small compared with the volume of S. But maybe the ratio could be improved by tricks, for example "adaptive sampling": whenever a random point p of S turns out not to lie in g(f(V)), we can use the coordinates of p to try to chop off part of S while maintaining its rectangular shape relative to the coordinate axes.
@Roger: what were your reasons for rejecting this approach in your randfixedsum package on FEX?