Awards in Professional Basketball Don't Make Sense
Among the fanbases of every major sport, discussion of awards is a routinely infuriating exercise. We argue about what each award means even after acknowledging that we disagree on the premise, proceed to disagree about who should win, and end on a doomed debate about the significance of the word 'valuable'. Awards aren't just an exercise in vanity or ego – they impact things like legacy, player salary, and consequently where players might end up playing. It's important to get these selections right, and more importantly to establish the rules by which we all play when making them.
Consider the curious cases of Paul George and Gordon Hayward. Both would have been eligible for the 'supermax' contract two years ago if they had made an All-NBA team, a contract that raises a player's maximum earnings by $30 million over five years. Both players left their teams that very offseason after missing out on an All-NBA spot (George going from the Pacers to the Thunder in a trade, Hayward from the Jazz to the Celtics as a free agent). Following his injury and failure (so far) to return to form, Hayward might be a very expensive drag on Boston's ability to sign good players in subsequent free agencies. And following a resurgence from a similarly devastating injury, George has kept the Thunder relevant and potentially helped secure fellow star Russell Westbrook's stay with the team. You can trace the potential ripple effects beyond just the four teams directly impacted, too – had George re-upped with the Pacers, the Lakers might not have held out hope to sign him and instead directed their efforts at landing another star. Perhaps the Jazz would be a perennially upper-echelon team anchored by a healthy Hayward, Donovan Mitchell, and Rudy Gobert.
Most major sports have awards that acknowledge the same things, even if the awards themselves have different names. All major sports have recognition for the most valuable player (MVP), the best defensive player (Defensive Player of the Year in basketball and football, Best Defenseman in hockey, and Best Overall Defensive Player in baseball), a set of players who were the best at offense and defense at each position (All-NBA in basketball, All-Pro in football, and Gold Glove and Silver Slugger in baseball), the best coach/manager, and the best executive. There are some 'lesser' awards that vary due to the nature of the sport – most improved, comeback player, best kiss, etc. But the MVP is the crowning achievement, the indication that you're the best in the world at what you do. And in basketball, the MVP and All-NBA awardees (effectively the top three players at each position) are the subject of plenty of debate.
This ambiguity doesn't exist in every sport. This century, only three of the 18 NFL MVPs have not been quarterbacks, and all three exceptions played running back. Some positions will simply never be in contention for MVP in the modern game – kickers, defensive players, and special teams players don't have nearly as much impact on the game. You have to go all the way back to 1986, when linebacker Lawrence Taylor won the award, to find a non-RB/QB winner. Baseball MVPs are split by league (which makes things annoyingly confusing), but the award always goes to the best overall hitter (which is easily quantifiable), who can play almost any position but tends to be an outfielder or corner infielder. There's a smattering of pitchers – Clayton Kershaw in 2014, Justin Verlander in 2011, and then all the way back to Dennis Eckersley in 1992 – who are at somewhat of a disadvantage because they have their own award, the Cy Young. In hockey, forwards win it 77% of the time – this century has seen only two goalies and no defensemen.
But basketball is more of a pure sport in the sense that it truly lacks positions. Each player on the court has the same powers and therefore, theoretically, equal potential to impact the game. Most importantly from a decision-making standpoint, all basketball players have the same statistics to look at – pitchers have ERA while batters have OBP, soccer forwards have shots on goal while keepers have saves, and so on, but every basketball player accumulates points, rebounds, assists, etc. Consequently, basketball players should be the easiest to compare, because you don't need a subjective translation factor to account for differences in position. So why do we argue so much about things like who the MVP should be? It should be as seamless as it is in baseball, where there's usually little debate about the winner, except in years when there's a transcendent pitcher.
So what is the MVP award really? There are a few interpretations:
Best player on best team
Best player among teams that made the playoffs
Best offensive player
Best all-around player regardless of record
Best player in the league regardless of performance
The fifth definition is obviously ridiculous: you can say that LeBron James was the best player in the league over the past several seasons, but that's a recognition of the absolute ceiling he could reach if he really tried – his 'playoff mode', if you will. This is a regular season award based on regular season output. It should reward who actually performed best, not who could have performed best if properly motivated.
In recent years, the second definition seems to be the one that aligns most closely with actual voting patterns. In the last 30 years, Westbrook (2017), Kevin Durant (2014), LeBron James (2012), Kobe Bryant (2008), Steve Nash (2006), Kevin Garnett (2004), Tim Duncan (2002), Allen Iverson (2001), Karl Malone (1999, 1997), Michael Jordan (1998, 1991), Hakeem Olajuwon (1994), and Magic Johnson (1989) all led their teams to the playoffs but were not on the best regular season team (14 of 30 seasons – note that Malone's 1997 Jazz won 64 games to the Bulls' 69). The remainder – James Harden (2018), Stephen Curry (2016, 2015), James (2013, 2010, 2009), Derrick Rose (2011), Dirk Nowitzki (2007), Nash (2005), Duncan (2003), Shaquille O'Neal (2000), Jordan (1996, 1992), David Robinson (1995), Charles Barkley (1993), and Johnson (1990) – were MVPs who also led the league in wins. Westbrook is probably the bar in terms of team record – the Thunder's 47 wins are an outlier among MVPs.
This MVP definition also extends to All-NBA teams, where 15 total players are selected with some positional restrictions (e.g. you have to have three centers, even if centers are the shallowest position in the league in a given year). This year, only two of the fifteen (James and Kemba Walker) did not make the playoffs. In this decade, non-playoff All-NBA players have been rare, comprising only Anthony Davis (2017, first team), DeMarcus Cousins (2016 and 2015 second teams), and Kevin Love (2014 and 2012 second teams).
Interestingly, none of the other awards seem to have much ambiguity in their definition. Defensive Player of the Year is just the best defensive player, though they do tend to come from 50-win teams. Whether this is coincidence isn't clear, because there's usually not that much debate – the last close race was Tyson Chandler beating out Serge Ibaka in the lockout-shortened 2012 season, and before that there were some not-even-THAT-close standoffs between Ben Wallace and Bruce Bowen in the mid-aughts. Sixth Man of the Year tends to be a somewhat closer race, but the right player always wins – see Eric Gordon and Jamal Crawford both over Andre Iguodala in back-to-back years.
But should team performance be the standard for MVP at all? After all, there's already a reward for players whose teams do well: the playoffs! It seems asymmetrical to judge a player on the performance of his team, when several other players (as well as external factors such as the coach) affect that outcome. And whichever way you decide, it should be absolute, without exception. You can't say that you're *only* going to let in playoff players, but then give the Cousins and Love types passes. And you can't say that you're going to let in all players, but then hesitate when Devin Booker inevitably becomes a 30-point scorer while the Suns continue to be ass. It's rare for a player to be so exceptional that he enters the best-player-in-the-league conversation without that effort translating to team success – but it does happen.
The problem is that it's impossible to extricate a player and judge them outside of a team context. You can't replace James Harden with an 'average' player because the Rockets' system is uniquely suited to his talents. You also can't measure just how 'difficult' many of his shots were, just like you can't attribute Curry getting open solely to his own inventiveness or the genius of the team's offensive scheme. You can't even somehow incorporate usage rate, because that neglects the impact a player has by just being on the floor – e.g. Curry always demanding the respect of a defender even if he's ice-cold and nowhere near the play. You can't use win expectancy as a metric either, because that's largely dependent on the previous season. Following up a 44-win season, Milwaukee was projected to win around 47 games, but ended up around 60. Coming off a 65-win season, the Rockets were poised for some regression back to 56, and they ended up pretty close at 53. You could say that the Bucks overperformed expectations and the Rockets failed to meet them, but that ignores that the Rockets have been consistently great for a few years running.

And is it really fair to penalize a great player for having great players around him? In 2015, you could say that Curry elevated the players around him, helping Klay Thompson become a first-time All-Star, with Draymond Green following suit the year after. But now, those players have been good for long enough that it's not entirely accurate to credit Curry for their success. We can intrinsically acknowledge that Green wouldn't be the same player if he weren't playing alongside Curry, but we can't truly project how different he would be unless (or until) we see it.
Quick note: for some reason, no one ever seems to know how to evaluate a difference in games played, or how many games should be required for award eligibility. The answer (to me, at least) is fairly obvious – just scale a player's per-game stats by the number of games played (which is a fancy way of looking at total stats, but per-game stats are inherently more palatable). In the hotly debated 2017 Rookie of the Year race between the 10-point scorer Malcolm Brogdon and the 20-point sensation Joel Embiid (who played only 31 games), such scaling would have left Embiid's average at a paltry 8 points a game. Funnily enough, it's Embiid's teammate Dario Saric who should have won that year.
If you're trying to compare two players, one of whom averaged 20 points but played only 41 games, and another who averaged 15 points and played in every game, you can cut the first player's points in half to account for the fact that he only played in half the games. Duh.
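The scaling above is simple enough to sketch in a few lines of Python (the function name and the 82-game season default are my own choices for illustration, not any official formula):

```python
def scaled_ppg(points_per_game: float, games_played: int, season_games: int = 82) -> float:
    """Scale a per-game scoring average by availability.

    Equivalent to spreading the player's total points over the full
    season: ppg * games_played / season_games.
    """
    return points_per_game * games_played / season_games

# The comparison from the text: 20 ppg over 41 games vs. 15 ppg over all 82.
print(scaled_ppg(20, 41))  # 10.0 -- half the games, so half the credit
print(scaled_ppg(15, 82))  # 15.0 -- full availability, average stands as-is

# Embiid's 2016-17 season, roughly 20 ppg over 31 games:
print(round(scaled_ppg(20.2, 31), 1))  # well under 10 points per game
```

The same scaling applies to any counting stat (rebounds, assists), which is what makes it a clean tiebreaker for availability debates.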
All of this is without even considering defense, where the analysis gets murkier. Not only are traditional statistics like rebounds and steals poor indicators of defense (Harden was second in steals this year), but the advanced statistics are notoriously easy to cherry-pick. Besides, the MVP has traditionally been code for 'offensive player of the year', which is why James Harden is even in contention for (and might win) this year's MVP over Giannis Antetokounmpo. However much better Harden's offensive game is than Giannis', the latter's defense trumps the former's to the point that Giannis made first-team All-Defense. As the only player to make both first-team All-NBA and first-team All-Defense, Giannis is the logical candidate for MVP if it truly is an all-around award, but the offensive skew keeps Harden annoyingly relevant.
This is where baseball, despite the positional inefficiency, benefits from the fact that you can isolate each player's performance on both offense and defense. You know exactly how good each pitcher is by a number of metrics, and so you can quantify how significant it would be to get a hit or a home run against said pitcher. You can easily track how many defensive errors a player makes, and how many plays they are actually involved in. Largely, baseball is an individual sport where the individuals take turns playing, but basketball always involves all five players on the court. Sure, you could try to use isolation stats, like the fact that Harden had over 300 unassisted three-pointers made this season (and is the all-time leader with over 1000 in his career). That number means something, but it's hard to quantify exactly what. Does it account for whether Harden got a favorable matchup switch, or whether he got a good screen, or whether he travelled to get off that shot? You could even try to use five-man or four-man lineup data, but that doesn't account for the opposing lineup and has major problems with sample size.
Given all this complexity, it seems like the answer should simply be the player with the best statistics. It feels like a conclusion reached through futility, but it's the most equitable approach. Or, create an Offensive Player of the Year judged solely by statistics (and not team performance) and rename the MVP to Best All-Around Player (so we can stop having stupid semantic debates). This would have interesting results – 2017 would have given us Kawhi Leonard as the unquestioned MVP instead of Russell Westbrook, 2014 could have given us Chris Paul (that year's only All-NBA First Team and All-Defense First Team selection), and 2007 would have been a battle between Kobe and Duncan instead of Dirk.
Just anything besides the way it is now.