Betting Market Power Rankings – Offense and Defense Rankings
by Michael Beuoy
The purpose of this post is to show how the Betting Market Power Rankings can be decomposed into Offensive and Defensive strength by looking at the over/under betting lines in conjunction with the point spreads.
One of the best features of the Advanced NFL Stats efficiency model is that it not only tells you who the best teams are, it also tells you why those teams are the best (passing efficiency, rushing success rate, penalty rate, etc.). Unfortunately, the betting markets don’t tell us why they favor one team over the other; all we get is the final point spread. However, by looking at the betting over/under for each game, and combining it with the point spread, we can at least get a sense as to whether it is offense or defense (or both) that is driving the market’s evaluation of each team.
First, a caveat. For the purpose of these rankings, Offense is defined strictly as “Points Scored per Game” and Defense as “Points Allowed per Game”. While this may align with general public sentiment as to what makes a good offense or a good defense, the definition is not perfect. The points a team scores are not solely a product of offensive skill and/or the defensive skill of their opponent. They can also be affected by the following:
• Their defense – A defense can provide favorable field position to its offense by forcing short drives or turnovers.
• Special Teams play
• Pace – I'm not sure this is much of a factor in the NFL (I know it is in basketball), but teams that play a faster-paced game that runs less clock (no huddle, few running plays) will produce more scoring opportunities for both teams in a given 60 minutes of playing time.
The ANS efficiency model avoids these pitfalls by focusing on per-play statistics. But I have no choice but to work with per-game estimates.
Methodology
The methodology ended up being very similar to the original power ranking methodology; I just had to apply some simple algebra to arrive at a new target. Take Week 16's Arizona @ Cincinnati game as an example. Cincinnati was favored by 4.5 points, and the over/under on total points was 40.5. So, if the difference of the scores was expected to be 4.5 and the sum of the scores was expected to be 40.5, then the implied predicted score was Cincinnati 22.5, Arizona 18 (simple algebra).
We know that average points scored in the NFL is around 22 points per team. So, the fact that the market expected Arizona to score only 18 points says either that Cincinnati's defense is good or that Arizona's offense is bad (or a combination of both), much in the same way that a large point spread tells you either that the favored team is good or that the underdog is bad (or a combination of both).
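That algebra is simple enough to sketch in a few lines of Python (the helper function here is my own illustration, not part of the original write-up):

```python
def implied_scores(spread, total):
    """Recover each team's implied score from a point spread
    (the favorite's expected margin) and an over/under total."""
    favorite = (total + spread) / 2
    underdog = (total - spread) / 2
    return favorite, underdog

# Week 16: Cincinnati favored by 4.5, over/under of 40.5
print(implied_scores(4.5, 40.5))  # (22.5, 18.0)
```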
Here is how I built the original model that was based solely on point spreads (GPF = Generic Points Favored):
Point Spread = Home Team GPF - Visiting Team GPF + 2.5
With the team GPFs determined such that they best predicted the point spreads.
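As a rough illustration of that fit, here is a least-squares sketch in Python. The three-team slate, its team names, and its lines are all hypothetical, and I'm assuming numpy; the actual fit may have been done differently.

```python
import numpy as np

# Hypothetical three-team slate: (home, away, spread with home favored positive)
games = [("A", "B", 6.5), ("B", "C", 4.0), ("A", "C", 9.5)]
teams = ["A", "B", "C"]
idx = {t: i for i, t in enumerate(teams)}

X = np.zeros((len(games), len(teams)))
y = np.zeros(len(games))
for row, (home, away, spread) in enumerate(games):
    X[row, idx[home]] = 1.0    # home team's GPF enters positively
    X[row, idx[away]] = -1.0   # away team's GPF enters negatively
    y[row] = spread - 2.5      # strip out the 2.5-point home edge

# lstsq's minimum-norm solution centers the ratings around zero
gpf, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(gpf, 2))  # roughly [3.67, -0.83, -2.83]
```

Because any constant can be added to every team's GPF without changing the predicted spreads, the minimum-norm solution is the natural choice: it pins the ratings so they sum to zero.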
The new model is built in a very similar fashion as follows:
Implied Home Score = League Average Scoring + Home Team Offense GPF (oGPF) – Away Team Defense GPF (dGPF) + 1.25
Implied Away Score = League Average Scoring + Away Team Offense GPF (oGPF) – Home Team Defense GPF (dGPF) - 1.25
Where I now determine two separate numbers for each team, an “oGPF” and a “dGPF”, which sum to the total team GPF (the League Average Scoring term works just like an intercept). Also note that I split the 2.5 point home field advantage evenly between offense and defense.
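The two-sided fit works the same way, with two rows per game (one per implied score). A minimal sketch under the same assumptions — numpy, and a hypothetical two-team slate with the implied scores already extracted from the lines:

```python
import numpy as np

# Hypothetical slate: (home, away, implied home score, implied away score)
games = [("A", "B", 27.0, 20.0), ("B", "A", 24.0, 23.0)]
teams = ["A", "B"]
n = len(teams)
idx = {t: i for i, t in enumerate(teams)}

rows, y = [], []
for home, away, h_pts, a_pts in games:
    # Home score = League Avg + oGPF[home] - dGPF[away] + 1.25
    r = np.zeros(1 + 2 * n)
    r[0] = 1.0                    # intercept (league average scoring)
    r[1 + idx[home]] = 1.0        # home offense
    r[1 + n + idx[away]] = -1.0   # away defense
    rows.append(r)
    y.append(h_pts - 1.25)
    # Away score = League Avg + oGPF[away] - dGPF[home] - 1.25
    r = np.zeros(1 + 2 * n)
    r[0] = 1.0
    r[1 + idx[away]] = 1.0
    r[1 + n + idx[home]] = -1.0
    rows.append(r)
    y.append(a_pts + 1.25)

beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
avg, ogpf, dgpf = beta[0], beta[1:1 + n], beta[1 + n:]
```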
See below for an updated week 16 ranking that includes O RANK and D RANK (I continue to blatantly plagiarize the ANS format):
Rank | Team | GPF | GWP | O Rank | D Rank | ANS Rank |
1 | NE | 9 | 0.78 | 1 | 16 | 4 |
2 | GB | 8.5 | 0.77 | 3 | 9 | 5 |
3 | NO | 7.5 | 0.74 | 2 | 17 | 3 |
4 | SF | 5 | 0.68 | 13 | 1 | 13 |
5 | PHI | 5 | 0.67 | 4 | 13 | 6 |
6 | BAL | 4.5 | 0.66 | 11 | 3 | 9 |
7 | PIT | 4 | 0.64 | 14 | 2 | 2 |
8 | ATL | 4 | 0.64 | 8 | 8 | 12 |
9 | DAL | 3 | 0.60 | 5 | 21 | 7 |
10 | SD | 2.5 | 0.60 | 6 | 20 | 11 |
11 | NYJ | 2.5 | 0.59 | 12 | 7 | 16 |
12 | NYG | 1.5 | 0.55 | 7 | 25 | 8 |
13 | MIA | 1.5 | 0.55 | 18 | 5 | 18 |
14 | DET | 1 | 0.54 | 9 | 23 | 10 |
15 | SEA | 1 | 0.53 | 16 | 10 | 24 |
16 | TEX | 0 | 0.50 | 24 | 4 | 1 |
17 | CIN | 0 | 0.50 | 17 | 14 | 15 |
18 | WAS | -0.5 | 0.48 | 21 | 12 | 19 |
19 | DEN | -1 | 0.47 | 25 | 11 | 25 |
20 | CAR | -1 | 0.47 | 10 | 26 | 23 |
21 | TEN | -1.5 | 0.44 | 23 | 15 | 21 |
22 | ARZ | -2 | 0.43 | 22 | 19 | 22 |
23 | CHI | -2 | 0.42 | 27 | 6 | 17 |
24 | RAI | -3.5 | 0.38 | 15 | 29 | 14 |
25 | BUF | -4.5 | 0.35 | 20 | 28 | 20 |
26 | MIN | -4.5 | 0.35 | 19 | 30 | 32 |
27 | KC | -5 | 0.33 | 31 | 18 | 27 |
28 | CLE | -5.5 | 0.31 | 30 | 22 | 26 |
29 | TB | -7 | 0.27 | 26 | 31 | 30 |
30 | STL | -7 | 0.27 | 32 | 24 | 29 |
31 | JAC | -7.5 | 0.26 | 29 | 27 | 28 |
32 | IND | -8 | 0.24 | 28 | 32 | 31 |
A couple things to note:
• These should match the previous Week 16 rankings, but they don’t. I think there were line movements prior to Sunday which shuffled the rankings a bit.
• Both the top 3 offenses (NE, NO, GB) and the top 3 defenses (SF, PIT, BAL) pass the sniff test.
Here’s another view which shows the oGPF and the dGPF explicitly (the baseline League Average scoring is 22.0 points).
Rank | Team | GPF | oGPF | dGPF |
1 | NE | 9 | 8.5 | 0 |
2 | GB | 8.5 | 7.0 | 1.5 |
3 | NO | 7.5 | 7.5 | 0 |
4 | SF | 5 | 0.5 | 4.5 |
5 | PHI | 5 | 3.5 | 1 |
6 | BAL | 4.5 | 1.0 | 3.5 |
7 | PIT | 4 | 0.5 | 4 |
8 | ATL | 4 | 2.5 | 2 |
9 | DAL | 3 | 3.5 | -0.5 |
10 | SD | 2.5 | 3.0 | 0 |
11 | NYJ | 2.5 | 1.0 | 2 |
12 | NYG | 1.5 | 2.5 | -1.5 |
13 | MIA | 1.5 | -0.5 | 2 |
14 | DET | 1 | 2.5 | -1 |
15 | SEA | 1 | -0.5 | 1 |
16 | TEX | 0 | -2.0 | 2 |
17 | CIN | 0 | -0.5 | 0.5 |
18 | WAS | -0.5 | -2.0 | 1 |
19 | DEN | -1 | -2.0 | 1 |
20 | CAR | -1 | 1.5 | -2.5 |
21 | TEN | -1.5 | -2.0 | 0.5 |
22 | ARZ | -2 | -2.0 | 0 |
23 | CHI | -2 | -4.0 | 2 |
24 | RAI | -3.5 | 0.0 | -3 |
25 | BUF | -4.5 | -1.0 | -3 |
26 | MIN | -4.5 | -1.0 | -3.5 |
27 | KC | -5 | -5.0 | 0 |
28 | CLE | -5.5 | -4.5 | -1 |
29 | TB | -7 | -3.5 | -3.5 |
30 | STL | -7 | -6.0 | -1.5 |
31 | JAC | -7.5 | -4.5 | -3 |
32 | IND | -8 | -4.5 | -3.5 |
What jumped out at me most here was how much wider the variation is in oGPF than in dGPF. The standard deviation of oGPF is 3.6 points, versus 2.2 points for dGPF. In the comments on the ANS Week 17 Efficiency Rankings, there was a discussion of the variability of offensive and defensive stats, with the conclusion that team defensive stats are more variable week to week because defenses are somewhat at the mercy of what offenses can do. I think the data above supports that conclusion, but I may not have thought it through entirely.
Also, there is a correlation between the oGPF and dGPF, which probably goes beyond bad teams just being bad on both sides of the ball. The correlation coefficient is 0.24 (it varies from 0.05 to 0.50 when looking at past seasons, but it's always positive).
I’m hoping to do an updated ranking (including the oGPF and dGPF split) featuring just the 12 playoff teams and the latest lines, but I may not have time to get it out before the Wildcard round starts.
Also, you can use the chart above to build point spreads and over/unders for any matchup.
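As an illustration, here's that computation in Python for a hypothetical NE-at-home-versus-SF matchup, plugging in the oGPF/dGPF values from the Week 16 table above:

```python
LEAGUE_AVG = 22.0
HFA = 2.5  # split evenly between offense and defense, per the model above

def matchup(home_o, home_d, away_o, away_d):
    """Return the implied (home score, away score) for a matchup."""
    home = LEAGUE_AVG + home_o - away_d + HFA / 2
    away = LEAGUE_AVG + away_o - home_d - HFA / 2
    return home, away

# NE (oGPF 8.5, dGPF 0.0) hosting SF (oGPF 0.5, dGPF 4.5), per the table
home, away = matchup(8.5, 0.0, 0.5, 4.5)
print(home - away, home + away)  # spread: NE by 6.0, over/under: 48.5
```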
5 comments:
Michael,
Great work as always. Is there a way to determine GPF based on the ANS rankings or their underlying stats? It would be incredibly useful to compare the betting market GPF and the ANS GPF.
Thanks Arash.
Here's the GWP to GPF conversion:
GPF = -7 * ln(1/GWP - 1)
The write up was a bit rough this week due to other commitments. On the oGPF and dGPF split, they don't add up to GPF due to rounding, but that can be addressed in future rankings.
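For anyone who wants to check the conversion, it can be written out directly (the inverse follows from simple algebra on the same formula):

```python
import math

def gwp_to_gpf(gwp):
    # The conversion quoted above: GPF = -7 * ln(1/GWP - 1)
    return -7 * math.log(1 / gwp - 1)

def gpf_to_gwp(gpf):
    # Inverting that formula gives a logistic curve
    return 1 / (1 + math.exp(-gpf / 7))

# NE's Week 16 GWP of 0.78 maps back to roughly its listed GPF of 9
print(round(gwp_to_gpf(0.78), 1))  # 8.9
```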
Big thanks to Ed Anthony for gussying up the tables. It's far beyond my (non-existent) HTML capabilities.
You can also compare this to the (backward-looking) Simple Ranking System. The two measures are very similar except that SRS works with the scores that actually happened, and your model works with the scores that were predicted. Both models generate an expected generic point spread, too.
If you subtract the predicted score (your metric) from SRS, you're left with a number that tells you the margin by which each team beat or missed their market predicted value.
This looks similar or identical to a points model I used last year to rank teams. It used Excel Solver to fit team rankings by their weekly points scored and allowed. I noticed that when I broke out the offensive and defensive ranks, the spread was considerably tighter on the defensive rankings, implying that there was more to be gained over average by focusing on offense. I calculated HFA to be about 2.3 points as well.