I'm stuck. I've reached the limit of my meager math skills. I could use some help.
I am planning a little football pool for a Super Bowl party. Each person will submit predictions for the score at the end of each quarter. I will construct a spreadsheet to calculate the winner.
My plan is to award 10 points for each correct prediction and something less for incorrect predictions according to how close they are. For example, if the actual score is 14, a guess of 14 would get 10 points, 13 or 15 might get 9 points, 12 or 16 might get 8, and so on.
With this method, the points are discounted from a maximum of 10 by one point for each point of error (a linear discount).
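In Python, that straight linear discount is just this (the function name is mine):

    def linear_points(score, prediction, max_pts=10):
        # One point off the maximum for each point of error.
        # Note this goes negative once the error exceeds 10.
        error = abs(score - prediction)
        return max_pts - error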
I didn't like this method for a couple of reasons. (1) I don't want negative points, so any guess that is off by more than 10 points gets zero. (2) A one-point miss costs the same penalty whether the actual score is 3 (a 33% error) or 50 (a 2% error).
An exponential decay function looked like a good choice. I chose to use it in the half-life form:
pts = MP * 2^(-error/h)
pts = the points to be awarded
MP = the maximum points (10)
error = the error in the guess, abs(score - prediction)
h = the half-life parameter, which I set to the actual score
A 1-point error when the score is 21 gets a 0.32-point penalty, whereas for a score of 3 it gets a 2.06-point penalty. I don't know if this is mathematically sound, but it feels about right. A 1-point error on a score of 21 is a 4.76% error, and a 0.32-point loss from 10 points is 3.25%, which is comparable. The two percentages are slightly less comparable when the score is 3.
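Here is the half-life scheme as a quick Python sketch (names are mine), reproducing the penalties above:

    def halflife_points(score, prediction, max_pts=10):
        # pts = MP * 2^(-error/h), with the half-life h
        # set to the actual score.
        error = abs(score - prediction)
        h = score
        return max_pts * 2 ** (-error / h)

    print(10 - halflife_points(21, 20))  # ~0.32-point penalty
    print(10 - halflife_points(3, 2))    # ~2.06-point penalty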
The problem arises when the score is zero: since the half-life is the actual score, the exponent (-error/h) divides by zero.
I am not sure how to solve this one. I would appreciate some help.
One idea I had is to add something to the half-life calculation:
h = ActualScore + n
I have no idea how to calculate "n".
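As a sketch, with n = 1 purely as a placeholder (picking n properly is the part I can't figure out):

    def offset_halflife_points(score, prediction, n=1, max_pts=10):
        # Same decay as before, but the half-life is shifted away
        # from zero: h = ActualScore + n, so score == 0 is safe.
        # n = 1 is just a placeholder value.
        error = abs(score - prediction)
        h = score + n
        return max_pts * 2 ** (-error / h)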
Another idea I had was to discount the points in the same proportion as the predictions are off.
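If I take that to mean the percentage error comes straight off the points, it would look something like this (though it runs into the same zero-score division):

    def proportional_points(score, prediction, max_pts=10):
        # Discount by the prediction's percentage error: a 10% miss
        # costs 10% of the points. Floored at zero for big misses.
        # Divides by zero when score == 0, same as before.
        error = abs(score - prediction)
        return max(0, max_pts * (1 - error / score))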