*My first move as the manager of the machine shop was to introduce standardized work. –Taiichi Ohno, father of the Toyota Production System*

Quality over quantity. Consistency. These are the hallmarks on which truly excellent organizations are founded. The centerpiece of the Toyota Production System is the reduction of variability. Variability leads to waste and loss, and that is anathema to true success.

This is going to get very real, very fast. Before we start slinging tables and numbers, feel free to go read and review the Basics.

Success in the NBA should be no different. I've talked before (see here, for example) about the value of consistency night in and night out. Were I a GM, a player/employee who consistently delivers night in and night out would be the ideal employee. One of my priorities would be to have a system to evaluate total productivity and productivity variation for all the players on my roster. Luckily, Wins Produced provides just such a framework for this analysis. The quicker I can complete this analysis and determine the value and potential of my roster, the better my edge against other GMs in trades. My enemies here are time and small sample size. I want to reach valid conclusions on player talent ahead of the market, but I am aware that the quicker I reach conclusions, the larger my error will be.

This post will focus on two things: player variability/reliability and the size of the error introduced by sample size. I want to rate the players, and I want to know how quickly I can do it. For this I'm going to need a hell of a lot of data. Luckily, I have Andres Alvarez and his mad skills (**All Powered by Nerd Numbers**) at my disposal. Andres went out and did splits for every player and every game for last season. Did you know that 442 players combined for 24,796 individual games played in the NBA last season? Now you do, thanks to Andres. With all this Wins Produced data in hand for the 2009-2010 season, I can go off and do an analysis of value, variability, and the predictive value of the numbers by chronological sample size.

Now, not every game qualifies for this analysis. To qualify as a sample, I'm requiring at least ten minutes played in a game. For a player to qualify for the ranking/evaluation sample, I want at least 20 game samples and 800 minutes played. For the correlation analysis, I'm going to use players with at least 50 game samples. This leaves 232 players and 16,616 game samples for correlation (all players for 2009-2010 with at least 50 games with >10 MP; the average is 71 games) and 286 players and 18,936 game samples for the Reliability Value (or Employee of the Year) rankings (minimum 20 games with >10 minutes played and >799 minutes total). Enough with the talking; let's get to variability.
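For the curious, the qualification filters above can be sketched in a few lines of code. This is a minimal illustration, not the actual script used for the post; the field name `minutes` and the data layout are assumptions.

```python
# Sketch of the sample-qualification filters described above.
# Each game log is a dict; only the "minutes" field is used here (hypothetical schema).

def qualifying_games(game_logs, min_minutes=10):
    """Keep only games where the player logged at least min_minutes."""
    return [g for g in game_logs if g["minutes"] >= min_minutes]

def qualifies_for_ranking(game_logs, min_games=20, min_total_minutes=800):
    """Ranking sample: at least 20 qualifying games and 800 total minutes."""
    games = qualifying_games(game_logs)
    total_minutes = sum(g["minutes"] for g in games)
    return len(games) >= min_games and total_minutes >= min_total_minutes

def qualifies_for_correlation(game_logs, min_games=50):
    """Correlation sample: at least 50 qualifying games."""
    return len(qualifying_games(game_logs)) >= min_games
```

Running every player's game log through these filters reproduces the two player pools described above.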

**How good is that sample on the internet?**

I’ve said before that the beginning of any season is a tantalizing time, full of promise and expectations but mostly questions. Will my favorite team/player be better or worse than expected? Will a team’s surprising/disappointing start prove to be a mirage or be sustained through 82 games? How fast can we start jumping to conclusions? For the media, which has no real conscience or memory, the answer can be measured in nanoseconds.

For the hopefully rational group of people that are my readers, this is a much tougher question. We know of things like the **law of large numbers** (**LLN**): as the number of samples in a data set increases, we get closer and closer to the real value of the thing we are measuring; conversely, the smaller the sample, the larger the error (or, more accurately, the possibility of it). So rushing to judgment based on a small sample is premature. A larger sample size is called for before we can draw any solid conclusions.
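The LLN point can be made concrete with a toy simulation (simulated numbers, not real NBA data): the average absolute error of a sample mean shrinks as the sample grows. The "true" productivity value and noise level below are made up for illustration.

```python
# Toy law-of-large-numbers demo: error of a sample mean vs. sample size.
import random

random.seed(42)
TRUE_MEAN = 0.200  # a hypothetical "true" per-game productivity
NOISE = 0.150      # hypothetical game-to-game standard deviation

def avg_error(sample_size, trials=2000):
    """Average absolute error of the sample mean over many simulated players."""
    total = 0.0
    for _ in range(trials):
        games = [random.gauss(TRUE_MEAN, NOISE) for _ in range(sample_size)]
        total += abs(sum(games) / sample_size - TRUE_MEAN)
    return total / trials

for n in (5, 10, 20, 40):
    print(n, round(avg_error(n), 4))
```

The printed errors fall roughly with the square root of the sample size, which is exactly why small early-season samples are so treacherous.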

We know that already. Frankly, just saying “get a larger sample size” is a little fuzzy for my tastes. Luckily, you know me by now: data, Excel, and math. If there’s some sort of answer to be found, I’m going to give it the ol’ Harvard try.

Let’s take a look at last year’s game data. We’ll look at numbers for the full year and for chronological game samples of 5, 10, 15, 20, 25, 30, and 40 games. We’ll look at:

- Raw Productivity (ADJP48) correlation to final season number: how closely the sample correlates to the player’s final full-season number
- Average total error in ADJP48 from sample to season: the difference between the sample value and the final full-season number, expressed in raw productivity per 48 minutes (ADJP48)
- % average total error in ADJP48 from sample to season: the same difference expressed as a % of the final season number
- Average absolute error in ADJP48 from sample to season: the absolute difference between the sample value and the final full-season number, in ADJP48
- % average absolute error in ADJP48 from sample to season: the absolute difference expressed as a % of the final season number
- Standard deviation of raw productivity (ADJP48) correlation to final season number: how closely the sample variation correlates to the player’s final full-season variation
- % average absolute error in standard deviation of ADJP48 from sample to season: the absolute difference between the sample variation and the full-season variation, expressed as a % of the final season variation

Table Time!

Fascinating. Let’s analyze.

The second column tells us that once we have a 15-game sample the correlation is above 75% (which is good). 20 games is above 80%, 30 gives us 90%, and 40 games is almost a lock at 94%. So at this point in the season, player productivity for the full year can be predicted with about 70% accuracy (depending on sample size). In two more weeks this should be close to 80%.

In terms of overall error (columns 3 & 4), we do not see a lot of variation. This means that league-wide productivity, and things like position adjustments and replacement levels for players, can be set very accurately with a very small sample (5 games yields a 2% variation).

As for absolute error (columns 5 & 6), we see a similar story to the correlation data. Right now you’d expect player productivity for the rest of the year to vary by about 15% to 20% from the current numbers. By the middle of December, this will be down to about 10%.

For the actual variability (columns 7 & 8), the results are a little different. Correlation increases more linearly with sample size. However, for absolute population variation, the percentages track absolute error. So at this point you have a fair idea of a player’s game-to-game variation.

So, to synthesize: at this point in the season there’s about a 30% uncertainty in the numbers (assuming the data follows the 2009-2010 pattern, which is a safe assumption). By the end of the calendar year this will be down to 15%, and by the All-Star break to 5%. I expect this might be improved further by eliminating injured players and rookies.

That covers the hard math portion of our program. Let’s do some fun rankings!

**Employee of the Year for the NBA in 2009-2010**

A lot of you out there are going through your own year-end evaluations. Hopefully, you feel these are a fair assessment of your contribution to the success of your enterprise. Your value and your consistency were measured, and you were rated fairly in comparison to your peers. So, the total opposite of the typical All-NBA balloting.

What I will attempt here is to evaluate players based on the guidelines set above. I’ll look at four numbers for 2009-2010: WP48, Wins Produced, WP48 standard deviation, and the WP48 I can expect 85% of the time. I’ll rank each player in each category and average the ranks. The players with the lowest average rank get the highest overall rating. If you remember, we have 286 players and 18,936 game samples for the Employee of the Year rankings (minimum 20 games with >10 minutes played and >799 minutes total).
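The average-rank scheme just described can be sketched as follows. The dictionary keys and the tie-breaking rule are my own illustration; note that for the standard-deviation category, lower is better, so it is ranked ascending.

```python
# Sketch of the average-rank "Employee of the Year" rating described above.

def rank(values, reverse=True):
    """Return 1-based ranks (highest value gets rank 1 when reverse=True)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def employee_of_the_year(players):
    """players: dicts with wp48, wins, stddev, wp48_85pct (hypothetical keys).
    Returns player indices sorted best-first by average rank."""
    r_wp48 = rank([p["wp48"] for p in players])
    r_wins = rank([p["wins"] for p in players])
    r_var = rank([p["stddev"] for p in players], reverse=False)  # low variability wins
    r_floor = rank([p["wp48_85pct"] for p in players])
    avg = [(a + b + c + d) / 4
           for a, b, c, d in zip(r_wp48, r_wins, r_var, r_floor)]
    return sorted(range(len(players)), key=lambda i: avg[i])
```

A player who is merely very good in every category can therefore outrank a player who is brilliant in two categories but wildly inconsistent, which is the whole point of the exercise.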

Table time again:

So, LeBron James in a landslide. The top ten is rounded out by: Jason Kidd, Rajon Rondo, Mike Miller, Andre Iguodala, Pau Gasol, Al Horford, Ben Wallace, David Lee and Steve Nash.

The bottom ten reflects guys who should not, by any means, be on your team (sorry, Mr. Pargo, Rasheed and others, but ball don’t lie).

If we use this to pick my own All-NBA teams, we get:

Thabo Sefolosha was a huge surprise on the first team, Zach Randolph on the second. But I guess, together with everyone else on this list, they got the job done night in and night out.

(Columns in the tables above: WP48 | Wins Produced | Std dev | >WP48 85% of Time | Worst Game | Best Game | Rank WP48 | Rank Wins | Rank Variability | Rank Worst Day | Avg Rank | Rank)


nerdnumbers

11/18/2010

You realize if you want me to shut up about Andre Iguodala you can’t put him top five on your list! Podcast topic #1 for next week! Consider it marked. Awesome work as always. I’m confused though, shouldn’t Kobe be on your first team not Gasol :p

some dude

11/18/2010

I dunno what Kobe’s career numbers would look like, but dude had a horrible month after his finger got messed up again and his variability and WP48 took a dive then. Us Laker fans know he wasn’t his normal Kobe last January, so we’re cool with Pau being there. In the playoffs he returned to form, though. :D

some dude

11/18/2010

Those are some interesting results. Dwight Howard is top 5 in WP48, WP, and Worst Day, but 199th in variability. Basically, he bounces around from average to elite every other game. How strange.

It would be interesting to rank the stuff by other categories. Like most variability for usage groupings (ie >27% usage, 20-27%, etc). Most variability for each WP48 grouping (>.3, then .25-.3, then .2-.25, etc).

My eyes seem to be telling me ball distributors are the least variable and the shooters and rebounders are the most variable players. I wonder if that’s right.

Great work. Now the next step is finding the variability in these players (and your ranking) when only up against the top defenses in the league. Say, top 8.

arturogalletti

11/18/2010

SD,

I actually think a cool next step is to ID the top 8 defenses using the splits.

some dude

11/19/2010

What do you mean? I figured defensive efficiency would be good enough.

arturogalletti

11/19/2010

I can actually work out raw win production for the opposition for each team on average and by opponent using splits. I can then do a defensive ranking based on this (i.e. A player’s numbers go down 10% when facing opponent A).
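That idea from the comment can be sketched roughly as follows. The data layout (a per-player average plus a per-opponent split) and the function are hypothetical illustrations of the "player's numbers go down 10% against opponent A" calculation, not the actual splits pipeline.

```python
# Sketch of ranking defenses by how much they depress opponent production.

def defensive_impact(splits, opponent):
    """splits: list of (player_avg_adjp48, {opponent: adjp48_vs_them}) pairs.
    Returns the average fractional change in production against `opponent`
    (e.g. -0.10 means players produce 10% less than their season average)."""
    changes = []
    for avg, vs_opponent in splits:
        if opponent in vs_opponent and avg != 0:
            changes.append((vs_opponent[opponent] - avg) / avg)
    return sum(changes) / len(changes) if changes else 0.0
```

Sorting teams by this number (most negative first) would give the defensive ranking described in the comment.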

Raspu10

11/19/2010

Great stuff, now we’re really getting somewhere. :) This all brings up a larger question regarding consistency, such as a preference for the consistently bad over the variably brilliant, when that’s your basic choice. Not surprisingly, the topic has come up consistently regarding Vladi’s playing time. But as of yet, it has only come up in fan discussions – no mainstream medium has touched upon it.

EvanZ

11/19/2010

So, this raises the following question: If 12 games into the season, there is about 70% correlation to final WP numbers, shouldn’t we be able to put some upper and lower bounds on team win totals now?

arturogalletti

11/19/2010

Actually, yeah. Barring injuries and trades, a projection should be possible. The big trick will be minute adjustment patterns (i.e. Rambis not playing Love).
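A crude version of the projection EvanZ is asking about could look like the sketch below: extrapolate the current pace over the remaining schedule and widen it by the roughly 30% uncertainty figure from the post. The function and its parameters are illustrative assumptions, not a real forecasting model.

```python
# Naive win-total bounds: current pace +/- an uncertainty band.

def win_total_bounds(current_wins, games_played, total_games=82, uncertainty=0.30):
    """Project the remaining games at the current winning pace, then widen
    the projection by +/- `uncertainty` and clamp to the valid range."""
    pace = current_wins / games_played
    remaining = total_games - games_played
    low = current_wins + remaining * pace * (1 - uncertainty)
    high = current_wins + remaining * pace * (1 + uncertainty)
    return max(0.0, low), min(float(total_games), high)
```

For an 8-4 team twelve games in, this gives a band of roughly 41 to 69 wins, which mostly illustrates how wide a 30% uncertainty really is this early in the season.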

Neal Frazier

11/19/2010

In addition to Howard; Boozer, Durant, C. Paul, G. Wallace, and Duncan all have their values decimated by variability – do we really want a metric that puts Al Jefferson ahead of Duncan and Jonas Jerebko ahead of Dwyane Wade because they are consistently bad and Duncan/Wade are inconsistently great?

I think variability might make sense as a factor when comparing players within tiers, but makes no sense in putting players into tiers (if that makes sense).

arturogalletti

11/19/2010

Neal,

I think part of what we’re seeing is the effect of age and injury (certainly for CP3 and Duncan). The metric is basically saying that, night in and night out, the top guys on the list were the most likely to earn their paycheck.

Someone like say Iggy or Noah was more likely to be able to do the things that made him valuable against anybody and that has real value.

Bill Gish

11/19/2010

I apologize for not letting go of this, but yesterday I had a question about Gerald Wallace and whether small sample size accounted for his current ranking. You replied:

Now I’m looking at your new table of a measure of variability/consistency and see that Wallace’s high variability drops him to 35th on your list.

That’s not shabby!

But he ranked 5th in Wins Produced with 19.4!

So I join Neal in thinking “variability might make sense as a factor when comparing players within tiers, but makes no sense in putting players into tiers.”

In fact someone like Wallace might be winning you games you have no business winning by going off on a given night.

I can already hear the counter argument that he might lose you games you should have won on the days he disappears.

In any case he presents a nice test of your thesis.

Dan Rosenberry

11/19/2010

How does this turn out if you check correlations to the rest of the season versus the full season?

For trade deadline purposes, that’s the correlation that matters. It doesn’t matter whether they were a star for the year including before the trade, only how they perform strictly after. A regression with a couple of years of prior stats and stats up to point X: how much confidence is generated for the rest of the season after X? Is this year’s bump for real after X games of >10 MP?

arturogalletti

11/19/2010

Dan,

Another cool followup post!

ilikeflowers

11/19/2010

Off topic,

Can we use the numbers to categorize players within positions? I’m thinking that there are three basic types at each position: possessors (at least 2/3 of production from possessions), scorers (at least 2/3 of production from scoring), and possessor-scorers (everyone else). Having a way to categorize them would assist in team construction. At some point we might even be able to quantify the synergy (deviation from expected performance) generated by different combinations of typed positions in a statistically significant way. The most fundamental question is: is it better to have a team (or frontcourt, backcourt, etc.) of complementary specialists who maximize the natural advantages of each position (i.e. pair a scoring C with a possessor PF or vice versa), or is it better to have balance (possessor-scorers) at every position? I have no idea what the answer to that question is. I would tend to go with the team of specialists, but there is a strategic advantage in the uncertainty that a team with complete balance presents when it comes to trying to defend them, and in overcoming bad nights by one of the parts.

EvanZ

11/19/2010

However a team can produce high WP48 is what matters, and I think it’s highly unlikely that a team could do that without being balanced. In that regard (i.e. controlling for WP48), we sort of already know that the answer should be that it doesn’t matter (at least, in the regular season) whether the team is made up of specialists or do-it-all players.

ilikeflowers

11/19/2010

How do we know that? Where is the evidence for that conclusion? Maybe you’re missing my point. What I’m getting at is trying to deal with diminishing returns and its opposite – synergy or whatever you want to call it. If we take a bunch of players at each position who have similar WP48s but arrive at them in very different ways, and we play all of the different combinations, some of these teams are probably going to over- and under-perform consistently (due to diminishing returns and its opposite), and this will be captured by the individual players’ WP48s. There is a difference between a team that is balanced and a team that is made up of balanced individuals. There are many different combinations of the former. The Bulls with Rodman plus an anonymous crappy center had a frontcourt of possessors, which they balanced out with scoring elsewhere; Camby with a scoring PF would be another example. You need to maximize your productivity, and limiting diminishing returns (and perhaps enhancing productivity) with the right combinations is a way to do that. Diminishing returns affects what, 5-10% of productivity? If you have two very productive players on the frontline or in the backcourt who need the ball in the same way, that can become a significant effect.

EvanZ

11/19/2010

My point is that you have to have a way to compare two theoretically equivalent teams, in terms of WP48, one built with 5 balanced individuals and the other built with specialists. How would you propose to do that experiment?

ilikeflowers

11/19/2010

The team level stuff is probably too difficult a place to begin, but I’d start with frontcourt pairs. You’d first have to assign each player a sub-position, then you’d have to track players as they went from playing with mostly one type of player to mostly another. The hard part is getting the data on the pairings, but plus-minus uses this sort of data, so it must be out there. The analysis itself doesn’t sound too difficult, but it’s not trivial. It’d require a large number of seasons, since we’re getting into the same sample size issues that plus-minus has (5-plus years seems to be the minimum), compounded by the fact that we’re trying to tease out a fairly small variation from the standard random (we assume) performance fluctuations. It’s really an analysis of which combos minimize diminishing returns, with the possibility of discovering enhanced returns on the side. In the end it may not matter, or there may be too much noise, but it’s a critical question that could be answered well enough. It’s the type of thing that I’d love to look into if I had the time.

nerdnumbers

11/19/2010

Ilikeflowers,

Glad to see you around more! I did a bunch with this a while back and will start revisiting it:

http://nerdnumbers.wordpress.com/2010/07/23/thursday-morning-musing-2010s-top-role-players/

Top players tend to be good at multiple things. I would be curious about super role players, but even looking at MVP candidates this year, I found that really good players tended to be balanced on offense and defense. In short, I don’t think many teams have to worry about, say, a Big Ben playing next to a Rodman or a Ray Allen playing next to Miller.

ilikeflowers

11/19/2010

The minute after I posted this I realized that it was really more appropriate for your site. I check the network every day; I’ve just cut back on my postings in general ;^). Keep up the great work, guys, it’s very appreciated!

Brandon

11/19/2010

Awesome article!…any way you can put the final player list in a table/spreadsheet for quick searching?

arturogalletti

11/19/2010

I just might be able to do that tonight:-)

bduran

11/19/2010

It would be interesting to see how this translates to player performance in the postseason. You would think the most consistent would perform better, but maybe the highly variable players are variable because they take it easy too often in the regular season.

Guy

11/19/2010

I think what you’re seeing with the high variability players is the heavy weight that WP assigns to rebounds, first, and also to assists. These likely vary more from game to game than shooting efficiency, and since they play such a big role in WP the players who do well in those categories will have a lot of variance. If you sorted players based on the proportion of their WP derived from rebounds, I think you will find that correlates pretty well with variability. The same may be true for assists. I’d guess that shooting guards and low-rebound forwards will have the least variability.

mick

11/20/2010

It is truly unbelievable how much Jason Kidd produces night in, night out. The perception in the media certainly seems to be that he was Hall of Fame-caliber but is now just a good piece. It’s almost as if you can’t mention his name without saying he gets burned by quick guards (who doesn’t in the league?). I don’t ever hear his name come up as one of the elite, or even top 5, point guards, yet the metrics certainly suggest he is one of a handful of top-flight players in the game. Arturo, this work is fantastic and really allows me to watch basketball and enjoy many of the intricate details of the game that actually produce wins.

chico

11/20/2010

Awesome work, Arturo. I think this analysis is fantastic for finding those mid-tier players who won’t demand a massive salary but will produce consistent effort and results night in, night out: for example Sefolosha, Jerebko, and Haywood. These guys would be fantastic value for playoff teams with star players, as you know they will play their role and produce regularly. As we all know, playing to potential can more easily be achieved against the lower-ranked clubs, but how do these players perform against the best, in the playoffs, when it counts?