by Jim Glass
If you haven't read Part I of this, you probably should; it provides the context and some details about the numbers being used here.
 Passing game changes, 1970s to 2000s. These numbers for passing statistic averages and standard deviations show how passing norms have changed since the 1970s.
("SD%" = the standard deviation divided by the average for the stat. For instance, for yards-per-completion the SD% of .128 is the SD of 1.47 divided by the average of 11.53.)
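The SD% figure is just the coefficient of variation, and the yards-per-completion example above can be checked in a couple of lines (the 11.53 and 1.47 figures are the average and SD quoted in the text):

```python
# SD% = coefficient of variation: standard deviation divided by the mean.
def sd_pct(mean, sd):
    return sd / mean

# Yards-per-completion example from the text: average 11.53, SD 1.47.
cv = sd_pct(11.53, 1.47)
print(round(cv, 3))  # about 0.13, the SD% quoted above as .128
```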
The bulk of the difference between eras results from rule changes. In 1978 the NFL adopted major rule changes to favor the passing game (see "The Top Ten Things that Changed the Game") and has steadily followed up on them since. The result has been shorter, safer, higher-percentage passing, and more of it.
The result of more, safer, less-aggressive passing is seen in the numbers for the average game.
 Average game passing:
With the norm for pre-1978 passing being 22% fewer attempts than today ... 33% fewer completions ... a meager 51% completion percentage ... and a TD-to-INT ratio of 15 to 19, we can see how so many fans today may take even the middling passers of the current game as being better than the top performers of yesteryear.
But there have been more changes than just shifting averages.
 What wins, and what won.
Regression analysis relating team winning percentage to performance by standard deviation, on both offense and defense, via rushing yards/attempt, pass completion percentage, yards per completion, interception percentage, and sack percentage, gives these notable coefficients for passing offense...
In the 1970s, compared to completion percentage, yards per completion contributed twice as much to winning. Today in the 2000s they contribute nearly equally.
Now we are getting to why those who actually watched Unitas and Starr play voted Unitas All-Pro five times to Starr's once, while many who know only today's game think Starr's passing numbers look a lot better.
 Yards-per-attempt: less than it seems. Yards-per-completion: more!
Yards-per-attempt (in its many variations) has become the standard for measuring QB performance for those who don't use passer rating or a proprietary stat (such as WPA or DVOA). And only Y/A and passer rating are easily applied back through history.
But there is a problem with Y/A: it is a compound stat equaling completion percentage multiplied by yards-per-completion. It is accurate only to the extent its two components contribute equally to winning, and their relative values don't change over time.
Take two QBs with an identical 8.0 yards per attempt: Sam Slinger gets his by throwing for 16 yards per completion while completing 50% of his passes, while Percentage Pete completes two-thirds of his passes at 12 yards per completion. With all other passing numbers at league average, in the 2000s the 8.0 Y/A corresponds to about a 63% winning percentage for both QBs.
But in the 1970s, Slinger's numbers correspond to a 70% winning level of play versus 63% for Pete's. That 70% is 20 points over 50%, versus only 13 for the 63%, so Sam's play contributes 54% more to winning than Pete's (20/13 = 1.54) due to his throwing longer completions - and that 54% is a lot.
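The Slinger/Pete comparison turns on the fact that Y/A factors into completion percentage times yards-per-completion, so very different styles can produce identical Y/A. A quick sketch confirming the two hypothetical stat lines above:

```python
# Y/A is a compound stat: completion percentage x yards per completion.
def yards_per_attempt(comp_pct, yards_per_completion):
    return comp_pct * yards_per_completion

# "Sam Slinger": 50% completions at 16 yards per completion.
sam = yards_per_attempt(0.50, 16.0)
# "Percentage Pete": two-thirds completions at 12 yards per completion.
pete = yards_per_attempt(2 / 3, 12.0)

print(sam, pete)  # both come out to 8.0 yards per attempt
```

Y/A alone cannot distinguish the two; only a model that weights the components separately, as the era-specific regressions here do, can.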
In the 1970s and earlier a strong-armed QB could win more games by reducing his completion percentage in the course of throwing longer –– an idea pretty much unheard of in the 2000s.
The low completion percentages of Unitas and Namath in their best years, held against them today, were not a "bug", they were a feature. High-yards-per-completion was worth much more than high completion percentage - and this is something that Y/A applied backward misses completely.
Unitas had MVP seasons in 1958 and 1959 with completion percentages of only 51.7% and 52.6% - well below his career average of 54.6%, and thus his "poor" years to modern eyes - but with top Y/C numbers of 14.8 and 15.0.
In 1964 Unitas and Starr had pretty much "mirror image" rate stats. Unitas' 51.8% completion percentage was well below average both for him and the league, only 9th of 14. Starr was first by completion percentage at 59.9% (and also first by today's passer rating). But Unitas was first by Y/C at 17.9, while Starr was 11th at 13.2.
The voters picked Unitas as NFL MVP. Applying the regression developed for 1971-5 (a bit of a fudge, but a lot better than using the passer rating of the 2000s) the results say the voters were right.
Unitas' mere 51.8% completion rate, good 2% interception rate, and top 17.9 Y/C produces play at a fantastic .910 winning level!
Starr's top 59.9% completion rate, superior 1.5% interception rate, and lowish 13.2 Y/C produces play at a .760 winning level - very good, but not even close to a match.
In the 21st century we don't get QBs repeatedly winning MVP awards while posting career-bottom completion percentages. That's the difference that yards-per-completion used to make.
For a bit more perspective, the career high Y/C numbers of today's top QBs don't reach even the career average Y/C numbers of the top QBs of the past.
(Table: career average Y/C and career high Y/C for top QBs of the past and of today.)
That's how the passing game has changed, and it is no accident - it's happened because what wins the game has changed.
Yards-per-completion may be considered the great overlooked statistic of NFL football, certainly in historical analysis - the "killer stat" of the pre-1978 game, the single offensive stat that contributed by far the most to winning. After 1978, it has become steadily less so, but even during 2006-10 it has contributed a little more to winning than completion percentage.
 Passing rating: Poor enough for today, awful for historical evaluations.
If yards-per-attempt neglects the importance of yards-per-completion, the official NFL QB passer rating system is openly hostile to it.
A completion that loses yards can increase the QB's passing rating. Taken by itself, every completed pass for zero or negative yards rates 79.2. So if a QB's rating is below 79, he can improve it by intentionally completing passes that lose yards.
On the 2010 passer rating list eight QBs were in this situation - a quarter of the league's starters. That's bad enough, but back in 1970 when the average rating was 62.5, fully 19 of the 24 rated QBs (79%) had ratings below 79 and could have improved their ratings by intentionally losing yards. (This is like a batter in baseball increasing his batting average by intentionally striking out.)
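The 79.2 figure falls straight out of the official NFL formula: each of its four components is clamped to the range 0 to 2.375, so a completion for negative yards zeroes out the yards-per-attempt term but still banks maximum credit from the completion-percentage and no-interception terms. A minimal implementation of the standard formula:

```python
# Official NFL passer rating: four components, each clamped to [0, 2.375],
# summed, divided by 6, and scaled to 100.
def passer_rating(comp, att, yards, td, ints):
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((comp / att * 100 - 30) * 0.05)   # completion percentage term
    b = clamp((yards / att - 3) * 0.25)         # yards-per-attempt term
    c = clamp(td / att * 100 * 0.2)             # touchdown percentage term
    d = clamp(2.375 - ints / att * 100 * 0.25)  # interception percentage term
    return (a + b + c + d) / 6 * 100

# One completion that loses 2 yards: the Y/A term bottoms out at 0,
# but the completion and no-interception terms are maxed at 2.375 each.
print(round(passer_rating(comp=1, att=1, yards=-2, td=0, ints=0), 1))  # 79.2
```

Since the yards term cannot go below zero, any lost-yardage completion scores the same 79.2 - which is exactly why a sub-79 rating can be padded this way.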
One can hardly think of a worse system to rate strong-armed QBs who help their teams most by throwing long passes for high Y/C while reducing their completion percentage.
Passing rating is always biased against long-throwers and in favor of short-throwers. This is not too bad when kept to the short-throwing era of today. But applying it backward in time produces ratings that are ever more distorted by (1) the increase in completion percentage over time, compounded by (2) the greater importance of long-throwing and the lower importance of completion percentage - even in then-current terms - to winning in earlier eras. It produces the totally absurd historical rankings noted in the first part of this story.
Applied back to the 1970s and 1960s it especially penalizes the deep throwers like Unitas and Namath, while giving a big boost to their short-throwing competitors such as Starr and Len Dawson.
Time to break again. Everything I write runs longer than expected.
Next: Joe Namath in the 2000s .... and the end.