
Getting What You Pay For

January 23, 1998

Have you ever bought a player, expecting a certain amount of Smallworld Point (SWP) production, and then been disappointed by a game that was way below average? I know I have. Does stability in game-by-game performance add value? Sometimes yes, sometimes no.

It's maddening when you buy a player, hoping for a few games of reasonable production, or even looking for a price increase to a level which reflects that expected point production - only to have that player deliver a few lousy games. I remember recently buying Donyell Marshall, who was averaging around 35 SWP per game at the time, and then watching him produce only 1 SWP vs. Detroit, followed the next night by a 22.5 SWP performance against the Bulls. Arghh! Performances like these, even if only sporadic, can mess up your trading strategies.

I recently wondered whether there are noticeable differences in game-by-game stability among the active players. There are a number of possible ways to measure this, even though no method seems totally appropriate. My first thought was to compare the standard deviations of each player's game-by-game SWP production. Standard deviation is a statistical measure of how volatile, or "scattered", a series of numbers is. For example, consider two players, "A" and "B". Player "A" produces SWPs of 29, 31, 30, 31, and 29, while player "B" delivers 10, 20, 30, 40, and 50. Both players are averaging 30 SWP/game. However, player A will have a very low standard deviation (because his games are all clustered tightly around the average of 30), while player B's standard deviation will be relatively high.
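If you want to see the arithmetic for yourself, here's a minimal Python sketch using the made-up game logs from the example above (illustrative numbers only, not real player data):

```python
import statistics

# Hypothetical game-by-game SWP for the two players in the example above
player_a = [29, 31, 30, 31, 29]
player_b = [10, 20, 30, 40, 50]

for name, games in [("A", player_a), ("B", player_b)]:
    avg = statistics.mean(games)     # both players average 30 SWP/game
    sd = statistics.pstdev(games)    # population standard deviation
    print(f"Player {name}: average = {avg:.1f}, std dev = {sd:.1f}")
```

Player A's standard deviation comes out at under 1 SWP, while player B's is a bit over 14 - same average, very different games.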

One problem with using standard deviation as my measure is that it doesn't distinguish games which are well above the average from games which are well below average. You would be very bothered if you bought player B just in time for his 10 SWP game, but you would not have a problem with an unexpected 50 SWP performance. (The standard deviation measures each of these results equally, as 20 points away from the average.) There are alternative measures which would handle this issue more consistently with our intuition. However, I decided the additional complexity was unwarranted for this analysis. (Hey, we're only evaluating basketball player performance, not developing a navigation system for the Starship Enterprise!) And don't worry, I'm done with the abbreviated lesson in statistical theory. I'll make an effort to keep the rest of this discussion user-friendly.
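For the curious, one such alternative is a downside (or "semi") deviation, which only counts the games that fall below the average. Here's a rough sketch of the idea - to be clear, this is just an illustration, not what I used for the numbers below:

```python
import statistics

def downside_deviation(games):
    """Like standard deviation, but only below-average games count against the player."""
    avg = statistics.mean(games)
    shortfalls = [(avg - g) ** 2 for g in games if g < avg]
    return (sum(shortfalls) / len(games)) ** 0.5

# Player B from the earlier example: only the 10 and 20 SWP games hurt him here
print(downside_deviation([10, 20, 30, 40, 50]))   # 10.0
```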

So, using standard deviation as my volatility measure, I calculated each active player's volatility over the entire season, as well as over just the last 20 games. I thought that the more recent performance might be more indicative of future expectations, since early-season performance might have been more volatile as players got accustomed to new teams, new situations, or just playing regularly again. I also looked at the ratio of the standard deviation to the average, since I figured a 10-point standard deviation was a lot more significant for someone averaging 15 SWP/game than for someone averaging 45 SWP/game.
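Here's roughly what that calculation looks like for a single player (a sketch, assuming his game log is just a list of per-game SWP in date order):

```python
import statistics

def volatility_profile(game_log):
    """Return (average, std dev, std dev as a fraction of the average)."""
    avg = statistics.mean(game_log)
    sd = statistics.pstdev(game_log)
    return avg, sd, sd / avg

def season_and_recent(game_log, recent=20):
    """Compare full-season volatility with volatility over the most recent games."""
    return {
        "full season": volatility_profile(game_log),
        f"last {recent} games": volatility_profile(game_log[-recent:]),
    }
```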

As a baseline measure, if I consider just the 200 or so most active players, they average roughly 25 SWP per game, with a standard deviation of 10 SWP per game, or 40% of the average. I'm less interested in this as an absolute statistic, though, and more interested in looking for players who were significantly better or worse than the average.
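To make that comparison across a couple hundred players, the screen I effectively ran looks something like the following (a hypothetical sketch - it assumes the game logs live in a dictionary keyed by player name, and the cutoffs are just the rough thresholds discussed below):

```python
import statistics

LEAGUE_RATIO = 0.40   # league-wide std dev as a fraction of the average, per the baseline above

def screen(game_logs, stable_cutoff=0.20, volatile_cutoff=0.50):
    """Split players into notably stable and notably volatile groups by std dev / average."""
    stable, volatile = [], []
    for name, games in game_logs.items():
        ratio = statistics.pstdev(games) / statistics.mean(games)
        if ratio <= stable_cutoff:
            stable.append((name, ratio))
        elif ratio >= volatile_cutoff:
            volatile.append((name, ratio))
    return stable, volatile
```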

Several players stood out as significantly more stable, with standard deviations that were close to 20% of their averages - about half as volatile as the league average. Karl Malone was far and away the most stable, with a full season standard deviation of 7.5 on an average of 44, for a ratio of 17%. His last twenty games have been even more stable. Other players showing good stability (in no particular order) were Tim Duncan, Michael Finley, Shaq, Detlef Schrempf, Christian Laettner, Shareef Abdur-Rahim, and David Robinson.

How about at the other extreme? Looking only at players who are likely to be owned by more than just a few managers, three players stand out with standard deviations that are more than 50% of their averages (both for the full season and for the last 20 games): Samaki Walker, Lorenzen Wright, and Rick Fox. Other guys worthy of mention in the high volatility category include Jason Kidd, Shawn Bradley, John Wallace, Lamond Murray, Jerry Stackhouse, and Toni Kukoc. If you hold these for only a few games, you really don't know what you're likely to get.

In closing, let's return to one of my opening questions, "Does this matter?" I suppose the best answer has to be, "It depends." If you're buying a player that you expect to own for a long time - maybe even all season - then daily volatility in point production doesn't matter at all, as long as the average works out to be what you want. But if you expect to own someone for a shorter duration, then it seems like it's at least worth considering. Sometimes, these more volatile players come with seemingly discounted prices (in fact, their point volatility might even be responsible for their relatively low prices... but that's an issue I don't want to deal with now). Suffice it to say that if you ignore a player's volatility and look solely at the averages, then you're at much greater risk of not getting what you think you paid for.


Hoop Pointers is written by Dave Hall (a.k.a. the Guru), an avid fantasy sports player. He is not an employee of any of the fantasy games discussed within this site, and any opinions expressed are solely his own. Questions or comments are welcome, and should be emailed to Guru<davehall@home.com>.