BBR Rankings: Schedule-Adjusted Offensive and Defensive Ratings (December 10, 2010)
Posted by Neil Paine on December 10, 2010
2010-11 NBA power rankings through the games played on December 9, 2010:
Rank | Prev | Team | W | L | WPct | Offense | Rk | Prv | Defense | Rk | Prv | Overall |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | Miami Heat | 15 | 8 | 0.652 | 4.12 | 4 | 4 | -5.02 | 5 | 6 | 9.14 |
2 | 3 | Boston Celtics | 18 | 4 | 0.818 | 2.68 | 8 | 9 | -6.11 | 1 | 3 | 8.79 |
3 | 2 | San Antonio Spurs | 18 | 3 | 0.857 | 5.48 | 3 | 3 | -3.09 | 9 | 9 | 8.58 |
4 | 5 | Dallas Mavericks | 18 | 4 | 0.818 | 1.32 | 12 | 11 | -5.06 | 4 | 4 | 6.38 |
5 | 7 | Los Angeles Lakers | 16 | 6 | 0.727 | 5.61 | 2 | 1 | -0.56 | 12 | 16 | 6.17 |
6 | 4 | Orlando Magic | 15 | 7 | 0.682 | -0.35 | 15 | 12 | -5.31 | 3 | 2 | 4.96 |
7 | 6 | Utah Jazz | 16 | 7 | 0.696 | 2.71 | 7 | 8 | -1.58 | 10 | 10 | 4.29 |
8 | 8 | New Orleans Hornets | 14 | 7 | 0.667 | -1.61 | 20 | 16 | -5.61 | 2 | 1 | 4.00 |
9 | 10 | Indiana Pacers | 10 | 10 | 0.500 | -1.24 | 17 | 18 | -4.03 | 8 | 8 | 2.79 |
10 | 9 | Denver Nuggets | 13 | 8 | 0.619 | 2.12 | 11 | 6 | -0.43 | 13 | 12 | 2.55 |
11 | 11 | Chicago Bulls | 12 | 8 | 0.600 | -2.18 | 22 | 22 | -4.35 | 7 | 7 | 2.18 |
12 | 12 | Atlanta Hawks | 15 | 8 | 0.652 | 2.21 | 10 | 5 | 0.05 | 16 | 17 | 2.15 |
13 | 13 | Oklahoma City Thunder | 15 | 8 | 0.652 | 2.75 | 6 | 7 | 2.13 | 19 | 19 | 0.62 |
14 | 14 | Portland Trail Blazers | 11 | 11 | 0.500 | -0.05 | 14 | 14 | -0.59 | 11 | 15 | 0.54 |
15 | 16 | Houston Rockets | 8 | 13 | 0.381 | 2.65 | 9 | 13 | 2.27 | 20 | 18 | 0.38 |
16 | 21 | New York Knickerbockers | 14 | 9 | 0.609 | 3.45 | 5 | 10 | 3.46 | 25 | 26 | 0.00 |
17 | 15 | Phoenix Suns | 11 | 11 | 0.500 | 5.68 | 1 | 2 | 5.75 | 30 | 30 | -0.06 |
18 | 22 | Philadelphia 76ers | 7 | 15 | 0.318 | -0.99 | 16 | 26 | -0.24 | 15 | 11 | -0.75 |
19 | 19 | Milwaukee Bucks | 8 | 13 | 0.381 | -5.95 | 30 | 30 | -4.96 | 6 | 5 | -0.99 |
20 | 18 | Memphis Grizzlies | 9 | 14 | 0.391 | -1.24 | 18 | 23 | 0.31 | 17 | 13 | -1.55 |
21 | 17 | Charlotte Bobcats | 8 | 13 | 0.381 | -2.06 | 21 | 17 | -0.26 | 14 | 14 | -1.80 |
22 | 20 | Toronto Raptors | 8 | 14 | 0.364 | 0.33 | 13 | 15 | 2.96 | 21 | 22 | -2.63 |
23 | 24 | Golden State Warriors | 8 | 14 | 0.364 | -1.24 | 19 | 19 | 3.48 | 26 | 27 | -4.72 |
24 | 23 | New Jersey Nets | 6 | 17 | 0.261 | -3.46 | 27 | 21 | 1.93 | 18 | 20 | -5.39 |
25 | 26 | Los Angeles Clippers | 5 | 18 | 0.217 | -2.75 | 23 | 20 | 3.05 | 23 | 29 | -5.80 |
26 | 29 | Minnesota Timberwolves | 5 | 17 | 0.227 | -3.22 | 26 | 29 | 3.03 | 22 | 23 | -6.25 |
27 | 27 | Detroit Pistons | 7 | 16 | 0.304 | -3.20 | 25 | 25 | 3.74 | 27 | 24 | -6.94 |
28 | 28 | Washington Wizards | 6 | 15 | 0.286 | -3.03 | 24 | 24 | 5.29 | 29 | 28 | -8.32 |
29 | 30 | Sacramento Kings | 5 | 15 | 0.250 | -5.59 | 29 | 28 | 3.20 | 24 | 25 | -8.79 |
30 | 25 | Cleveland Cavaliers | 7 | 15 | 0.318 | -4.76 | 28 | 27 | 4.17 | 28 | 21 | -8.93 |
HCA | 3.21
LgRtg | 107.57
To read more about the methodology and what these numbers mean, click here.
December 10th, 2010 at 11:05 am
Nice.
Would the numbers change much if you adjusted for rest days?
http://sonicscentral.com/apbrmetrics/viewtopic.php?p=32743#32743
December 10th, 2010 at 11:12 am
Wow, that's great stuff, DSM! I probably won't be able to factor that in every week (the way it is now, I have it set up to run quickly and post), but I might be able to re-run things in the middle of next week and see how (if) factoring in rest days changes the ratings.
December 10th, 2010 at 12:42 pm
Thanks Neil. Following DSM's question and yesterday's column by Justin about the Knicks, just one question: do these rankings somehow take the "ease" of a team's schedule into account?
I know they're based on efficiency differential, but I just can't decide whether a tough schedule (where one could assume the opponents' efficiency would be higher) would automatically imply a lower ranking...?
December 10th, 2010 at 1:06 pm
You should read Doug's original post on SRS:
http://www.pro-football-reference.com/blog/?p=37
Basically, the ratings start with a team's efficiency differential. Then they adjust for strength of schedule by adding or subtracting based on how much above or below average their opponents' ratings were. If you're a +2 eff. diff. team facing a +2 schedule, your rating would be +4; if you're at +2 e.d. and faced a -2 schedule, your rating is 0.
The real trick is that the SOS is dependent on the ratings and the ratings are dependent on SOS, so it has to essentially run through many iterations before converging on the final set of ratings.
Anyway, to answer your question, your efficiency diff. gets credited for playing a tough schedule and debited for playing an easy schedule.
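The iteration described above can be sketched in a few lines. This is not Neil's actual code, just a minimal illustration with made-up game margins: each team's rating starts at its raw average margin (or efficiency differential) and is repeatedly re-adjusted by its opponents' current ratings until the values settle.

```python
# Minimal SRS-style iteration sketch (hypothetical data, not the BBR code).
# Each game is listed once per team perspective: (team, opponent, margin).

def srs(games, n_iter=100):
    teams = {t for g in games for t in (g[0], g[1])}
    margins = {t: 0.0 for t in teams}   # running sum of margins
    opps = {t: [] for t in teams}       # opponents faced
    counts = {t: 0 for t in teams}
    for team, opp, margin in games:
        margins[team] += margin
        counts[team] += 1
        opps[team].append(opp)
    for t in teams:
        margins[t] /= counts[t]         # raw average margin
    ratings = dict(margins)
    # Rating = own margin + average opponent rating; iterate to convergence.
    for _ in range(n_iter):
        ratings = {t: margins[t] + sum(ratings[o] for o in opps[t]) / len(opps[t])
                   for t in teams}
    return ratings

# Toy round robin: A beat B and C by 5, B beat C by 5.
games = [("A", "B", 5), ("B", "A", -5),
         ("A", "C", 5), ("C", "A", -5),
         ("B", "C", 5), ("C", "B", -5)]
ratings = srs(games)
```

With the toy schedule above, A's +5 raw margin gets debited for facing below-average opponents, so its final rating lands below +5, and the league ratings sum to zero.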
December 10th, 2010 at 1:45 pm
Lakers are too high... it's not a good sign when they're life-and-death to beat a couple of teams who are among the worst in the league and a combined 0-22 on the road.
Hopefully Bynum's return will change things.
December 10th, 2010 at 6:09 pm
Neil, I have to wonder about something. Is it possible to run the squared-error minimization on a formula like:
Home OffRtg = (LgAvg + HCA)*HomeTeamOffensiveStrength/AwayTeamDefStrength
That is, will the minimization algorithm support this formula?
The reason I'm asking: if you use such a formula, you could sum a few such terms ((LgAvg1+HCA1)*HTOS1/ATDS1 + (LgAvg2+HCA2)*HTOS2/ATDS2), and the numbers might not stay very close to 1. That is, it seems plausible that the HTOS terms in such a formula would "represent" different types of offence, and their values would reflect not only how well the team runs specific offensive modes, but also its preferences among them. I thought it might be interesting to see such results, especially since you'd allow the distinction between "modes" to arise automatically, without making any assumption about what they might be (except their number). It might be pointless, but I don't know how to check.
What do you think?
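The one-dimensional version of this proposal can be sketched directly: fit a per-team offensive and defensive strength multiplier by minimizing squared prediction error. The team names, scores, and use of `scipy.optimize.minimize` here are all illustrative assumptions, not anything from the actual BBR pipeline (note also that the overall scale of the multipliers is arbitrary, since scaling all offences up and all defences up by the same factor leaves the predictions unchanged):

```python
# Hypothetical sketch of the multiplicative model proposed above:
#   predicted home ORtg = LgAvg * HomeOffStrength / AwayDefStrength
# fit by minimizing total squared prediction error. Toy data only.
import numpy as np
from scipy.optimize import minimize

games = [  # (home_team, away_team, observed home offensive rating)
    ("A", "B", 112.0), ("B", "C", 101.0), ("C", "A", 98.0),
    ("B", "A", 105.0), ("C", "B", 103.0), ("A", "C", 115.0),
]
teams = sorted({t for g in games for t in g[:2]})
idx = {t: i for i, t in enumerate(teams)}
lg_avg = np.mean([g[2] for g in games])  # league-average ORtg

def sq_error(params):
    # First half of params = offensive strengths, second half = defensive.
    off, dfn = params[:len(teams)], params[len(teams):]
    return sum((ortg - lg_avg * off[idx[h]] / dfn[idx[a]]) ** 2
               for h, a, ortg in games)

# Start every team at 1.0 (exactly league average) and minimize.
x0 = np.ones(2 * len(teams))
res = minimize(sq_error, x0)
off = dict(zip(teams, res.x[:len(teams)]))
```

A strength of 1.05 would mean "5% better than league average", which is why (as Neil notes below) the one-dimensional version is nearly equivalent to the additive SRS: (multiplier - 1) times the league rating is approximately the additive adjustment.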
December 10th, 2010 at 10:41 pm
I pointed this out a couple weeks ago when Milwaukee was 30th in Offense and 1st in Defense, but now, in reverse, Phoenix is 1st in Offense and 30th in Defense. It seems quite odd for two teams to do that (one each way) in the same year.
December 11th, 2010 at 11:19 am
@ #7, Jared Ras
That's actually quite common for Phoenix: they run a high-risk, fast-paced offense that scores them a bunch of points but also concedes a lot of turnovers and scores for the other team.
Milwaukee was similar last year, in reverse.
December 11th, 2010 at 11:41 am
Re: #6 - Interesting. That would produce values where 1 was average, and instead of adding the ratings to the league avg (as in SRS), the strength ratings would essentially be multipliers that you apply to the league avg rating. I'm not sure what you mean by "modes" and "preferences" of offense, though.
December 11th, 2010 at 11:46 am
OK, I ran that through the same data I ran the BBR rankings on (so, not including last night's action):
(You probably shouldn't split the lg constant and HCA out, because you're just applying the multiplier to their sum -- which happens to be the lg ORtg. It doesn't matter, though; half the avg ORtg multiplied by two obviously equals the avg ORtg.)
Average squared prediction error per game was 190.2; the BBR Rankings' avg squared error was 185.1. Still, it's an interesting concept, and definitely a different approach than the SRS.
December 11th, 2010 at 5:05 pm
Thanks Neil, I'll explain what I mean by "modes".
I came to think, regarding this general approach, that having offence/defence as a single number is somewhat problematic. Take OKC, for example: they had a good defensive team last year, but with a real problem defending the paint, and they had more trouble against teams with a good offensive C/PF. A one-dimensional approach to offence/defence (one-dimensional meaning a single number for offence/defence) will never reflect anything like that. So I tried to think about how to add more dimensions. In your formula, you can't add more parameters, because they would all have exactly the same effect and you wouldn't get anything. But by putting things in as multipliers, you can. As you did it (one-dimensional), it's almost identical to the additive version (the multiplier minus 1, times 100, is the additive value), because the difference is only second-order, but you can add more dimensions and they won't be identical.
So you could do the same with a formula like (LgAvg1+HCA1)*HTOS1/ATDS1 + (LgAvg2+HCA2)*HTOS2/ATDS2, and the numbers might diverge from 1 by more than they do now, since they would carry additional meaning: beyond the actual efficiency of the team (which is very similar across the league, of course), they could be related to the team's preferences.
It's completely hypothetical at the moment, of course, but I hope that by looking at teams with a high type-1 offence and contrasting them with teams with a high type-2 offence, we could find some meaning in those "types". I find it interesting because these types would come directly from the formula, not from our perceptions of basketball, which I tend not to trust as long as they aren't supported by numbers. (Too many ideas that make a lot of sense and are widely accepted by the public, like clutch or the "hot hand", seem not to exist...)
Was I clear? I could try to explain again.
December 12th, 2010 at 11:29 am
Ah, now I see. So you'd basically be trying to tease out different components of a team's offensive rating based on how their performance changes against certain "types" of opponents (not knowing what the types are beforehand, but expecting them to emerge automatically when you run the formula).
I'd have to play around with the execution (and I'm not totally sure it would produce significant results), but that's worth taking a look at some point.
December 12th, 2010 at 7:10 pm
I'm not sure it would provide significant results either, but I thought it might be interesting to see. One more remark on the subject, if you do tackle it. The number of dimensions to use is quite arbitrary (2? 3? 5?), but I thought of an approach to establishing a reasonable number. You could run prediction tests (on last year's data, for example), dividing it into training and test data: minimize on the training data, then see how well (in terms of average squared error) the model predicts the test data. I would expect improvement as the number of dimensions increases, but I speculate the improvement will be drastic only as long as there are actual "aspects" of the game that correlate with the new dimension, and quite small afterwards. This is based on intuition alone, of course, so I wouldn't trust the idea, but if it's simple to test you could try it and see what the graph looks like.
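The train/test procedure described above could be sketched like this. The function names, the toy data, and the placeholder "always predict the training mean" model are all hypothetical; in practice you would plug in the multiplicative model's fit and predict routines and sweep the dimension count:

```python
# Sketch of the holdout idea above (hypothetical helper, toy data).
import random

def holdout_error(games, fit_model, predict, n_dims, seed=0):
    """games: list of (home, away, observed_rating) tuples.
    fit_model(train_games, n_dims) -> params
    predict(params, home, away)   -> predicted rating"""
    shuffled = list(games)
    random.Random(seed).shuffle(shuffled)       # reproducible split
    cut = len(shuffled) // 2
    train, test = shuffled[:cut], shuffled[cut:]
    params = fit_model(train, n_dims)           # minimize on training half only
    return sum((obs - predict(params, h, a)) ** 2
               for h, a, obs in test) / len(test)

# Trivial baseline "model": always predict the training-set mean.
games = [("A", "B", 100.0), ("B", "A", 102.0),
         ("A", "C", 98.0), ("C", "A", 104.0)]
fit_mean = lambda train, d: sum(g[2] for g in train) / len(train)
predict_mean = lambda params, home, away: params
baseline = holdout_error(games, fit_mean, predict_mean, n_dims=1)
```

To pick the dimension count, you'd compute `holdout_error` for each candidate (1, 2, 3, 5, ...) with the real model and look for the point where held-out error stops improving sharply, per the intuition above.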
I would be delighted if you emailed me with anything regarding this idea, by the way. (I assume you can get access to the required email address.)