Begin with a model of the form Y = a*X^b, transform it to Ln(Y) = Ln(a) + b*Ln(X), and estimate Ln(a) and b via least-squares regression. Calculate a = Exp(Ln(a)), then calculate predicted values of Y from Y = a*X^b for the available data. The sum of the predicted values will not equal the sum of the observed values... the residuals will not sum to zero. Predictions from the fitted model are biased. The size of that bias depends on several factors... it can be trivial or it can be annoyingly large. If the data fit the model "impressively" then the bias is probably not a big deal, but it will be there nonetheless. I find this to be of much more concern than the distribution of the "errors". In some recent work with certain air pollution data I found the average bias to be in the range of 20% to 40% of the observed values, although that was very noisy data.
This "statistical bias" usually comes as a surprise to those who buy exotic software and set themselves up as experts because they know how to use SmartStat or whatever they bought.
This is another instance where a trip to Monte Carlo can be educational.
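Here is a minimal Monte Carlo sketch of the effect described above. The true model, coefficients, noise level, and sample size are all illustrative assumptions (not from the original post): multiplicative lognormal errors on Y = a*X^b, fit by least squares on the log scale, then back-transformed.

```python
import math
import random

random.seed(42)

# Illustrative true model: Y = a * X^b with multiplicative lognormal noise.
a_true, b_true, sigma = 2.0, 1.5, 0.6
n = 10_000
xs = [random.uniform(1.0, 10.0) for _ in range(n)]
ys = [a_true * x**b_true * math.exp(random.gauss(0.0, sigma)) for x in xs]

# Least-squares fit of Ln(Y) = Ln(a) + b*Ln(X) (closed-form simple OLS).
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
mx, my = sum(lx) / n, sum(ly) / n
b_hat = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
a_hat = math.exp(my - b_hat * mx)   # a = Exp(Ln(a))

# Back-transformed predictions Y_hat = a * X^b on the original scale.
preds = [a_hat * x**b_hat for x in xs]

# The residuals on the original scale do not sum to zero: the predictions
# systematically undershoot, because Exp(mean of logs) < mean of Y.
bias = (sum(ys) - sum(preds)) / sum(ys)
print(f"relative bias of back-transformed predictions: {bias:.1%}")
```

With these assumed settings the relative bias is on the order of 1 - Exp(-sigma^2/2), roughly 15-20 percent here; shrinking sigma (a tighter fit on the log scale) shrinks the bias, which matches the observation that an "impressive" fit makes the problem less of a big deal.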