@niplav Let p = the probability you assign to the true outcome, and q = the probability you write down. Then your expected score is s(p,q) = p f(q) + (1-p) f(1-q), where f(q) is the score awarded for assigning probability q to the outcome that actually happens.
If the scoring rule is proper, the derivative of s with respect to q is 0 at q = p. This is not the case for points based on the log odds: https://www.wolframalpha.com/input?i=d%2Fdq+%28p+ln%28q%2F%281-q%29%29+%2B+%281-p%29+ln%28%281-q%29%2Fq%29+%29%3D+0
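A symbolic version of that Wolfram Alpha check, as a minimal Python sketch (assuming sympy is available):

import sympy as sp

# Expected score under the log-odds rule, f(q) = ln(q / (1 - q)).
p, q = sp.symbols("p q", positive=True)
s = p * sp.log(q / (1 - q)) + (1 - p) * sp.log((1 - q) / q)

# ds/dq works out to (2p - 1) / (q (1 - q)); at q = p that is
# (2p - 1) / (p (1 - p)), which is nonzero unless p = 1/2,
# so the log-odds rule is not proper.
print(sp.simplify(sp.diff(s, q)))
print(sp.simplify(sp.diff(s, q).subs(q, p)))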
@niplav Specifically, it incentivises extremising: e.g. if you think something has a 60% chance of happening, you would want to predict 0.9999..., because that would give you a zillion log points if it happens, and negative a zillion log points if it doesn't, for an expected 0.2 zillion log points.
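To see this numerically, a minimal sketch (Python with numpy; expected_score is just an illustrative helper, not anything from the thread) comparing the log-odds rule against the ordinary log score, which is known to be proper:

import numpy as np

def expected_score(f, p, q):
    # Expected score when the true probability is p and you report q:
    # s(p, q) = p f(q) + (1 - p) f(1 - q).
    return p * f(q) + (1 - p) * f(1 - q)

log_odds = lambda q: np.log(q / (1 - q))  # the rule in question
log_score = lambda q: np.log(q)           # log score, known to be proper

p = 0.6
qs = np.linspace(0.001, 0.999, 999)
for name, f in [("log odds", log_odds), ("log score", log_score)]:
    best = qs[np.argmax(expected_score(f, p, qs))]
    print(f"{name}: expected score maximized at q = {best:.3f}")
# log odds: maximized at the grid edge, q = 0.999 (push q toward 1)
# log score: maximized at q = 0.600 (= p, as a proper rule should)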
@TetraspaceGrouping yeah, I looked at whether the log odds of the squared probabilities give a proper scoring rule, but that has the same problem
The search continues (I want this for Range and Forecasting Accuracy, because linearly extrapolating the Brier score doesn't work)
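For the record, the same numeric check applied to one reading of that variant, f(q) = ln(q^2 / (1 - q^2)) (an assumption about what "log odds of the squared probabilities" means), confirms it is improper: for p = 0.6 the expected score peaks at q = 0.8 rather than 0.6 (setting the derivative to zero gives q = 3p - 1, until that runs into the extremes).

import numpy as np

# f(q) = log odds of the squared probability (one reading of the proposal).
f = lambda q: np.log(q**2 / (1 - q**2))

p = 0.6
qs = np.linspace(0.001, 0.999, 999)
scores = p * f(qs) + (1 - p) * f(1 - qs)
print(f"expected score maximized at q = {qs[np.argmax(scores)]:.3f}")  # 0.800, not 0.600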
@TetraspaceGrouping hm, that's too bad
Thanks for checking!