Fanservice (with more Bobo)

Posted on 11/03/2010


My last post, in which I reviewed and ranked the smackdown picks (from the Wages of Wins Network 2011 NBA Super Stat Geek Smackdown and from commenters across the web), generated some interesting questions and some good requests. We're all about the fanservice at Arturo's Silly Little Stats (particularly when we need a short post because we're doing our day job).

For those looking for kittens:

[Image: Busy day]

This post is going to address those questions and comply with some of the requests (rather than finishing my first in-season rookie review; you'll have to wait until tomorrow for that one).

Commenter Guy asks about the smackdown model evaluation:

Arturo: can you please explain what a “hit rate” means? It sounds like you are letting each system choose its own “strong buys” and then measuring the success rate. I’m not trying to be a pain here, but that is just very wrong. Imagine you had a perfect system that predicted every team’s win% exactly right. Now, take all those predictions and regress them 50% toward the mean (a .650 team becomes a .575 team, etc.) Under your system, this new set of predictions will do MUCH better — although by definition it is really worse — because it will only issue a “strong buy” when one team is actually vastly better than the other. The trick is that the new system will issue many fewer strong buys, but will almost always be right. So at a minimum, you need to tell us how many strong buys each system generates.

Much better would be something like rewarding each system with one point for every game it calls right, with a one point bonus if it was a strong prediction. Or measure everyone against the Vegas line as common benchmark. But you can’t let the system decide how many strong buys it issues, and then just rate by success rate.

A good and fair question (and request). Let's review, clarify, and expand:

The Picks

The Method for Evaluation

To evaluate how the predictions are doing, I take everyone's raw win projections for each team and combine them with the equation I came up with for the home team winning a single game (see here for detail). To put it simply:

Probability of the home team winning a game (Win %):

Win % = (Projected Wins, Home Team - Projected Wins, Road Team)/82 + .606

= (Projected Home Team Win% - Projected Road Team Win%) + HCA (.606)

where HCA (.606) is the home-court advantage.
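As a sketch, the formula can be written as a small function (the .606 home-court constant and the 82-game season are from the post; the example team win totals are hypothetical):

```python
def home_win_probability(proj_wins_home, proj_wins_road):
    """Probability the home team wins, per the post's formula:
    difference in projected win% plus home-court advantage (.606)."""
    return (proj_wins_home - proj_wins_road) / 82 + 0.606

# Hypothetical example: a 55-win home team hosting a 35-win road team
p = home_win_probability(55, 35)
print(round(p, 3))  # 0.85
```

Note that two evenly matched teams give the home side exactly the .606 baseline, which is just the league-wide home-court advantage.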

I then work this out for every game so far as follows:

  • If W% is greater than 60%, call it a strong Win (W) for the home team.
  • If W% is greater than 50% but less than 60%, call it a weak Win (WW) for the home team.
  • If W% is greater than 40% but less than 50%, call it a weak Loss (WL) for the home team.
  • If W% is less than 40%, call it a strong Loss (L) for the home team.
  • I then look at everyone's hit rate for strong predictions and for all predictions.
  • I rank each model/analyst on both.
  • I assign points based on ranking (double points for strong predictions, since I value those more).
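A minimal sketch of the bucketing and hit-rate steps above (the slate of games is hypothetical; each game is a pair of predicted home win probability and whether the home team actually won):

```python
def classify(win_pct):
    """Bucket a home-team win probability per the rules above."""
    if win_pct > 0.60:
        return "W"    # strong Win
    elif win_pct > 0.50:
        return "WW"   # weak Win
    elif win_pct > 0.40:
        return "WL"   # weak Loss
    else:
        return "L"    # strong Loss

def hit_rates(games):
    """games: list of (predicted win %, home team actually won) pairs.
    Returns (hit rate over all games, hit rate over strong calls only)."""
    all_hits = strong_hits = strong_total = 0
    for win_pct, home_won in games:
        hit = (win_pct > 0.50) == home_won
        all_hits += hit
        if classify(win_pct) in ("W", "L"):   # strong predictions only
            strong_total += 1
            strong_hits += hit
    return all_hits / len(games), strong_hits / strong_total

# Hypothetical four-game slate
games = [(0.75, True), (0.55, False), (0.35, False), (0.62, True)]
print(hit_rates(games))  # (0.75, 1.0)
```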
Guy's question goes to what hit rate means. It's simply:

  • Hit Ratio (All): % of all games picked correctly by Win % (so for all 52 games this year)
  • Hit Ratio (Strong): % of all games where the Win % is >60% or <40% that were picked correctly (on average, about 33 games per model)

Based on that, the results through yesterday (46 games) looked like:


But I'm intrigued by Guy's request. If I:

  • Assign a point per game called correctly
  • Assign an additional point for a strong call
  • Penalize 1 point for a miss
and redo the tables (updated through 11/02/10), it looks like:
Take a bow, reservoirgod. The WOW analysts are still leading the pack. The Sports Guy is moving up past the other models, and Bobo is very angry.
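One reading of Guy's alternative scoring (a point per correct call, a bonus point when the correct call was strong, minus one for any miss) can be sketched as follows; the game data is hypothetical:

```python
def score(games):
    """games: list of (predicted home win %, home team actually won).
    +1 per correct call, +1 bonus if the correct call was a strong one
    (win % above 60% or below 40%), -1 per miss."""
    total = 0
    for win_pct, home_won in games:
        correct = (win_pct > 0.50) == home_won
        strong = win_pct > 0.60 or win_pct < 0.40
        if correct:
            total += 2 if strong else 1
        else:
            total -= 1
    return total

# Hypothetical slate: two strong hits, one weak hit, one strong miss
games = [(0.70, True), (0.30, False), (0.55, True), (0.65, False)]
print(score(games))  # 2 + 2 + 1 - 1 = 4
```

This rewards confident, correct systems without letting a model inflate its hit rate by issuing only a handful of strong calls, which was the heart of Guy's objection.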
Finally, commenter Neal Frazier asks:
Will you be asking any blog that loses to Bobo the Monkey to leave the WOW network in shame?
I don’t think we’ll have to ask :-)

Posted in: Uncategorized