Comments on: The 50 Biggest Playoff Upsets, 1991-2009
http://www.basketball-reference.com/blog/?p=4121

By: Jake @ Jump Higher (Thu, 14 Jan 2010)
Nice blog you have here. I agree with number 1, the April 30, 2003 Mavericks vs. Blazers game. I didn't see it coming; lucky I didn't bet on that game.

By: DSMok1 (Mon, 07 Dec 2009)
Oh... did you ever figure out how to calculate the standard error of the SPM estimate for each player? I was utterly puzzled by how Rosenbaum got those numbers; the statistics involved seem to be a bit beyond me. There was a thread over on the APBR board about that term, but I don't know if it was ever resolved.
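One plausible reading of that per-player standard error (a sketch only, not necessarily Rosenbaum's method) is to propagate the regression's coefficient covariance matrix through each player's stat line; the Python below is purely illustrative and every variable name is hypothetical.

import numpy as np

# Hypothetical sketch: standard error of each player's fitted SPM, assuming the
# reported SE is the uncertainty of the regression's fitted value.
# X: (n_players, k) box-score rates used in the regression (incl. intercept column)
# y: (n_players,) adjusted plus/minus targets; w: (n_players,) minute weights
def spm_with_standard_errors(X, y, w):
    XtWX = X.T @ (w[:, None] * X)              # X' W X for the weighted fit
    beta = np.linalg.solve(XtWX, X.T @ (w * y))

    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = (w * resid ** 2).sum() / dof      # weighted residual variance

    cov_beta = sigma2 * np.linalg.inv(XtWX)    # covariance of the coefficients

    # SE of the fitted SPM for each player's stat line x: sqrt(x' Cov(beta) x)
    se = np.sqrt(np.einsum('ij,jk,ik->i', X, cov_beta, X))
    return X @ beta, se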

By: DSMok1 (Mon, 07 Dec 2009)
@ #5 Neil

I noticed that too when I ran some SPM numbers myself. Perhaps Rosenbaum's system was over-parameterized, since he had relatively few data points. Did he run the regression over all 420 players with >250 minutes? (He has results for 420 players.) His notes stated that "The regression is weighted by minutes played with the 2003-04 season counting twice as much as the 2002-03 season." That adds even more error to the regression.

How exactly did you run your regression? Did you use one-year APMs? Those are really noisy, but probably reasonable for this purpose, since the errors should even out. Since you have, what, six years of data, the outliers should not over-influence the regression.
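For reference, here is a minimal sketch of the minutes-and-season weighting described in that note, with hypothetical inputs and one-year APM assumed as the regression target:

import numpy as np
import statsmodels.api as sm

# Minimal sketch of a minutes-weighted SPM regression (hypothetical inputs):
# stats: (n, k) per-40-minute box-score rates, apm: one-year adjusted +/-,
# minutes: minutes played, season: array of season labels.
def fit_spm(stats, apm, minutes, season):
    # Weight by minutes, with 2003-04 counting twice as much as 2002-03
    season_weight = np.where(season == "2003-04", 2.0, 1.0)
    weights = minutes * season_weight

    X = sm.add_constant(stats)
    results = sm.WLS(apm, X, weights=weights).fit()
    return results   # inspect results.params, results.pvalues, results.rsquared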

By: Neil Paine (Mon, 07 Dec 2009)
"Could some of these results be biased because the bad teams were not playing their stars the whole time (because they were winning) while the losing teams were leaving their starters in?"

That's a good point, but unfortunately we don't have play-by-play data for these games, so we can't say who was on the floor in the highest-"leverage" moments. That said, since these are playoff games, I think it's fairly safe to assume most teams played to maximize their point differential and didn't pull their starters.

"When I did the Statistical Plus/Minus for the NCAA last year, I summed to Ken Pomeroy's efficiency differential, which should be about the same as SRS for college (right?)"

Yes. In fact, SRS doesn't explicitly take pace into account the way KenPom's adjusted efficiency differential does, so I'd say his efficiency differential is the "more correct" metric to sum to (since SPM was regressed on efficiency differential, not per-game point differential).
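To make the pace point concrete, a toy comparison (hypothetical totals; KenPom's actual adjustment also corrects for opponent strength, which is not shown here):

# Per-game point differential vs. efficiency differential per 100 possessions.
# A fast team and a slow team can post the same per-100 margin while showing
# very different per-game margins.
def point_diff_per_game(pts_for, pts_against, games):
    return (pts_for - pts_against) / games

def efficiency_diff_per_100(pts_for, pts_against, possessions):
    return 100.0 * (pts_for - pts_against) / possessions

# e.g. a +5.0 per-game margin at 100 possessions per game is +5.0 per 100,
# but the same +5.0 per game at 88 possessions per game is about +5.7 per 100.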

By: Neil Paine (Mon, 07 Dec 2009)
I dropped them because they were not statistically distinguishable from zero. In fact, if you look at the updated regression, A/40 and DR/40 also have p-values suggesting their coefficients are nothing but random fluctuation around a true coefficient of zero, but since Prof. Rosenbaum apparently found them significant in his original model, I decided to leave them in for now. I still don't see how he got an R^2 of almost 0.44, since the highest I can get using publicly available APM estimates is 0.31, but I suppose that's another issue for another day.
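A sketch of that kind of pruning step, assuming the predictors sit in a pandas DataFrame with named columns (hypothetical, in the spirit of the weighted-fit example above):

import statsmodels.api as sm

# Hypothetical: refit the weighted regression keeping only predictors whose
# coefficients are statistically distinguishable from zero at the given alpha.
def drop_insignificant(X, y, weights, alpha=0.05):
    full = sm.WLS(y, sm.add_constant(X), weights=weights).fit()
    keep = [col for col in X.columns if full.pvalues[col] <= alpha]
    return sm.WLS(y, sm.add_constant(X[keep]), weights=weights).fit()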

By: DSMok1 (Mon, 07 Dec 2009)
Nice work, Neil. Why, again, did you drop the other factors in SPM? I've been running a game-by-game SPM analysis on OU Hoops.com using the model with height and age. Did you drop them because they do not reflect actual court performance? That might be reasonable...

Could some of these results be biased because the bad teams were not playing their stars the whole time (because they were winning) while the losing teams were leaving their starters in?

When I did the Statistical Plus/Minus for the NCAA last year, I summed to Ken Pomeroy's efficiency differential, which should be about the same as SRS for college (right?)

By: Jason J (Fri, 04 Dec 2009)
That would be a very cool thing to see: the "overachievers" list.

By: Neil Paine (Fri, 04 Dec 2009)
Thanks! Sure, I can extend the same method to calculate each team's probability of winning each series, and then see which teams won more series than they "should have".
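A minimal sketch of that extension, assuming a single-game win probability is already in hand for each game of a best-of-seven (how those probabilities would come from SRS and home court is left out of the sketch):

from functools import lru_cache

# Probability of winning a best-of-seven series, given per-game win probabilities
# p_games for games 1 through 7 (these could differ by home/road).
def series_win_prob(p_games, wins_needed=4):
    @lru_cache(maxsize=None)
    def prob(game, wins, losses):
        if wins == wins_needed:
            return 1.0
        if losses == wins_needed:
            return 0.0
        p = p_games[game]
        return (p * prob(game + 1, wins + 1, losses)
                + (1 - p) * prob(game + 1, wins, losses + 1))
    return prob(0, 0, 0)

# Example: a flat 60% per-game favorite wins the series about 71% of the time
print(series_win_prob([0.60] * 7))   # ~0.7102

Summing these series probabilities for each team and comparing against actual series wins would give the "overachievers" list mentioned above.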

By: Raj (Fri, 04 Dec 2009)
Great post. Is there a way you can extend this to see which individual teams had the most unlikely playoff runs?
