2024-25 Top Ten Players

PDF Version

Productivity Rating (PR) ranks two defensemen as the top two players in the league, some of the players in the top ten were a bit of a surprise, and a change in the way PR is calculated was introduced days before the end of the season. Given all of that, it is worth taking a closer look at the top players in the league and the method used to select them.

Objectively, I have no issue with the top ten. The formulas are what they are, the numbers are what they are, and no amount of re-running the process will produce different results. Subjectively, I was surprised at some of the names at the top.

Why Do I Even Use Formulas?

There are several reasons I use formulas to rate player performance in a season.

Most importantly, I have no experience at any level of competitive hockey, so my personal opinion as to who is the best player is completely meaningless.

I do not watch every game that is played, nor do I read every game’s press coverage, nor do I read every game’s box score. Doing those things would make my personal opinion more meaningful, but I have no interest in spending that much time on those hockey “tasks.” It is far more important that I spend time with family and friends, that I play sports like golf and curling, that I play card games, and that I cook.

I like hockey, and I like numbers, and I was a computer programmer at Statistics Canada during my working life. On my retirement, I needed to keep my brain engaged, and I was confident I could use hockey statistics to evaluate hockey players.

Through NaturalStatTrick.com, the statistics from every team from every game for every player are freely available at the season level. God bless Natural Stat Trick.

I use formulas to evaluate NHL players because the results of the formulas are much better than my personal evaluations could ever hope to be. Every player is evaluated based on a selected set of statistics. Players who do a lot in a season get a high PR-Score, and players who don’t do much get a low PR-Score. I have 18 seasons’ worth of data, and I am confident that the PR process does a very good job of categorizing players.

Is PR perfect? Hell no. But I do not let an unachievable “perfect” get in the way of an existing “very good.”

It is now time to take a look at this season’s top ten players, as determined by their PR-Scores (from Method 2).

2024-25 Top Ten Players

Every time I do a season’s ratings and look at the top players, there are surprises. The “normal” surprise is a top-ten player from a team I don’t follow. Sometimes the surprise is a player I thought would be in the top ten but didn’t make it. And sometimes the surprise is a familiar player I never expected to see in the top ten in the league.

This year’s list has all of those surprises. Kyle Connor and Moritz Seider are players whom I didn’t see much of this season. Connor McDavid and Evan Bouchard are missing. Perhaps most surprising of all is seeing Jake Sanderson in the top ten. And I’m saying that even though I’m a fan of the Senators.

So … there were a bunch of surprises in the top ten, and PR was calculated using a new method. Is the new method doing something strange?

Investigating PR Methods

Going into this year’s calculations, I thought the differences between Method 1 and Method 2 would be slight. I was fully expecting that a few players who would have been in the top ten in Method 1 would get shuffled out in Method 2.

There were two planned differences between Method 1 and Method 2. Firstly, Method 2 values goals and assists 5% more than Method 1. Secondly, Method 2 added a component that better rewarded players on their defensive contribution, replacing a Method 1 component that was supposed to reward defence but actually favoured offence.

The new component had the biggest potential impact on highly rated players, as almost all highly rated players are good offensive players. The next table shows this season’s top ten players with their Method 1 PR-Scores added.

Both Method 1 and Method 2 had the top seven players in the same order. It is safe to say that those seven players are the cream of the crop.

The PR-Scores of the top seven players are all lower in Method 2 than in Method 1. For Nikita Kucherov and Mitch Marner the drops are slight: 0.04 and 0.03 PR-Points, respectively. For Nathan MacKinnon the drop is huge: 0.91 PR-Points.

Evan Bouchard (EDM, D), Nick Suzuki (MTL, F) and Connor McDavid (EDM, F) were in positions 8, 9 and 10 in Method 1. Suzuki is 14th in Method 2, a small change that most fans not living in Montreal would see as correct. Suzuki is a very good player, but he probably isn’t a “top ten in the league” player.

McDavid is 24th in Method 2 and Bouchard is 25th: the removed component greatly favoured those two players, and the new component doesn’t do them any favours. Let’s look at this new component a little closer while considering the fate of three players: McDavid, Bouchard and Kyle Connor.

PR Method 2 New Component

I must apologize for the next table, as it is 30 rows long. In my defense, seeing the Method 1 and Method 2 data for all of the top 30 players really helps one understand why McDavid, Bouchard and Connor moved around in the rankings so much.

The new component in Method 2 is “defensive zone balanced expected goals against”: DZB.xGA. The removed component from Method 1 was “Team-relative Corsi” (TRC).

The table shows each player’s rank under the two methods (M1 Rank, M2 Rank), each player’s PR-Points for goals and assists (G, A), and each player’s PR-Points for TRC (M1) and DZB.xGA (M2). The final column shows the difference in total rewards for those categories.

Negative adjustments to PR-Score are shown in red-coloured font, and you will notice that almost every player has one. There are two reasons for that: M2 does a better job of rewarding defensive play than M1 did, and most highly rated players are not strong defensive players.

Eleven of the top 30 players have a positive value for DZB.xGA, while about 55% of the league as a whole has a positive value for it. In TRC, 29 of the top 30 players had a positive score, and the magnitude of TRC tends to be bigger than that of DZB.xGA. Replacing a larger positive component with a smaller one resulted in most of the top players from Method 1 getting a lower PR-Score in Method 2.

The PR-Scores of Bouchard and McDavid dropped significantly in Method 2: -1.32 and -1.06 PR-Points, respectively. They had a lot of offensive zone faceoffs and, consequently, their Corsi data was much better than that of other Oilers who were forced to play more often when Edmonton had defensive zone faceoffs. Their Method 2 drop reflects that they are not great defensively: their expected goals against data is higher than average given the frequency of their defensive zone faceoffs.

Kyle Connor shot up for two reasons: the increased PR-Points for scoring, and the swap of a strongly negative TRC score for a less negative DZB.xGA score. That Connor had poor TRC data is a bit of a surprise.

Kyle Connor and TRC and DZB.xGA

Team-relative Corsi (TRC) compares a player against his teammates. TRC rewards players who get a lot of offensive zone faceoffs.

Defensive-zone-balanced expected goals against (DZB.xGA) compares a player against the league, rewarding a player who has good xGA data with respect to the frequency of his defensive zone faceoffs.

For unknown reasons (I’m neither a strategist nor an analyst), Winnipeg’s Corsi data was worse when Kyle Connor was on the ice than when he was on the bench, giving him a negative team-relative Corsi (TRC). The same is true for his teammate Mark Scheifele.

Both Connor and Scheifele also have negative DZB.xGA scores, so they aren’t setting the world on fire defensively. The difference between them and the other top players is that their TRC score was more negative than their DZB.xGA score, so using DZB.xGA increased their PR-Score. Most of the top players had a large positive TRC score, so replacing it with a smaller (and sometimes negative) DZB.xGA score decreased their PR-Score.

Due to the change of methods, Connor moved from 29th (Method 1) to 9th (Method 2). Scheifele moved from 45th to 15th. Let me stress that they are the same players they always were. They rank higher in Method 2 because it does a better job of rating players.
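The component swap described above boils down to simple arithmetic. Here is a minimal sketch; the function name and every number in it are illustrative assumptions, not the actual PR formulas or any player’s real component values.

```python
# Illustrative sketch of the Method 1 -> Method 2 component swap.
# All numbers are hypothetical; this is not the actual PR formula.

def method2_score(m1_score, trc, dzb_xga):
    """Remove the Method 1 TRC reward and add the Method 2 DZB.xGA reward."""
    return m1_score - trc + dzb_xga

# A typical top player: a large positive TRC is replaced by a small
# negative DZB.xGA, so his PR-Score drops.
top_player = method2_score(m1_score=10.00, trc=0.80, dzb_xga=-0.10)   # ~9.10

# A Connor-like player: his TRC was more negative than his DZB.xGA,
# so the swap raises his PR-Score.
connor_like = method2_score(m1_score=8.00, trc=-0.60, dzb_xga=-0.20)  # ~8.40

print(f"{top_player:.2f} {connor_like:.2f}")
```

The sign pattern, not the exact magnitudes, is the point: whenever the removed TRC score is larger than the new DZB.xGA score, the player’s PR-Score falls, and vice versa.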

Sanderson vs. Other Top Defensemen

As mentioned several times now, I was quite surprised to see Sanderson in the top ten, especially as several defensemen who outscored him were ranked below him. Let’s start by looking at the rank and PR-Score of defensemen who were in the top 25 in the league and who were rated below Sanderson (Makar and Werenski are rated above him).

Sanderson out-scored Seider, Sergachev and Weegar, so they’ll be dropped from further investigation. Below is the scoring information for the five remaining players.

The question before us is: how did Sanderson get a higher rating than the defensemen who out-scored him? The answer is: Productivity Rating is a composite statistic, so Sanderson must have done better in other components that make up PR. As the table below inelegantly shows, that is exactly what happened. Various components were grouped into one of four categories.

Scoring is a grouping of goals and assists.

Time includes all time-on-ice, the time-on-ice bonus for skaters, penalty-kill time-on-ice and the defenseman’s bonus (which is based on time-on-ice).

BTGH includes blocks, takeaways, giveaways and hits.

xGADZ includes “defensive zone balanced expected goals against” (DZB.xGA) and “defensive zone starts” (DZS).

All four defensemen had more PR-Points than Sanderson from scoring. In all other groups of components, Sanderson had more PR-Points than each of the other defensemen.

I think a picture will clear things up. The following bar chart shows the differences between each defenseman and Sanderson. They all had an advantage in scoring (first group of columns), and they all had lower scores in each of the other three areas.

Sanderson is represented by the thick line going across the chart from left to right, while the columns show the difference between Sanderson’s score in a category and the other defensemen’s scores.

I want to stress that Sanderson’s advantage over the other defensemen is not just his time-on-ice. A detractor of these results could say that Sanderson gets more ice time than he would on another team because he is so much better than Ottawa’s other defensemen. Who else was Coach Green going to put on the ice?

That same argument could be used against the other defensemen when it comes to the scoring data. Morrissey plays with Kyle Connor and Mark Scheifele; Hedman plays with Kucherov and Hagel; Dahlin plays with Alex Tuch and Tage Thompson; Bouchard plays with McDavid and Draisaitl.

All statistics reflect a player’s skills, his role on the team, his usual on-ice partners, and the opponents he usually faces.

An Unintended Change in PR Calculations

It turns out there was one more difference between the two PR methods. It was completely unintended and fairly slight in size, but it must be discussed nonetheless. Did you notice that Mitch Marner gained +0.17 PR-Points in Method 2 from the two intended changes, yet his PR-Score changed by -0.03 PR-Points? Where did that other -0.20 PR-Points come from?

Both Method 1 and Method 2 reward a player for the number of defensive zone faceoffs he was involved in during the regular season. It was intended to reward players who are assigned defensive duties by their coaches, who put them on the ice when there is a faceoff in their defensive zone.

The formula in M1 and M2 is identical. What is different is the source of the faceoff data. In Method 1, faceoff data from all manpower situations was used. In Method 2, only faceoff data from 5v5 manpower situations was used.

To calculate the new DZB.xGA component, I needed faceoff zone counts from 5v5. With as much thought as goes into picking my socks each morning, I decided to use the 5v5 faceoff counts for the defensive zone start (DZS) component. I probably should have used as much thought as goes into picking my shirt.

In M1, players who are used heavily on penalty kills would get a better DZS score than other players. In M2, players who are used heavily on penalty kills do not get a better DZS score than other players, unless they are also heavily used in the defensive zone at 5v5.

Marner gets a lot of defensive zone faceoffs on the penalty kill, which are no longer counted. He also gets a lot of offensive zone faceoffs on the powerplay, which are also no longer counted.

In M1, Marner had 692 defensive zone faceoffs out of 1,849 total faceoffs, giving him +0.37 PR-Points. In M2, Marner had 385 defensive zone faceoffs out of 1,138 total faceoffs, giving him +0.16 PR-Points.
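Marner’s counts can be reproduced from a per-situation breakdown. In the sketch below, only the 5v5 line and the all-situation totals come from the figures above; the split between penalty-kill and power-play faceoffs is invented for illustration, as is the data layout.

```python
# Sketch of the data-source change. Each entry is
# (defensive zone faceoffs, total faceoffs) for one manpower situation.
# The 5v5 counts are Marner's from the text; the PK/PP split is hypothetical.
faceoffs = {
    "5v5": (385, 1138),
    "PK":  (250, 280),   # hypothetical penalty-kill counts
    "PP":  (57, 431),    # hypothetical power-play counts
}

# Method 1: sum defensive zone and total faceoffs across all situations.
dz_m1 = sum(dz for dz, total in faceoffs.values())
tot_m1 = sum(total for dz, total in faceoffs.values())

# Method 2: keep only the 5v5 counts.
dz_m2, tot_m2 = faceoffs["5v5"]

print(dz_m1, tot_m1)   # 692 1849
print(dz_m2, tot_m2)   # 385 1138
```

The formula that turns these counts into PR-Points is unchanged between methods; only the counts fed into it differ.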

Please note that all the component values in this section are shown with two digits of precision (0.37) rather than four (0.3676). The rounded values make it look like Marner lost 0.20 PR-Points for defensive zone faceoffs; he actually lost 0.2068 PR-Points.
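The rounding effect is easy to reproduce. In this sketch, only the 0.3676 and 0.2068 figures come from the text; note that rounding at different stages can surface the same exact loss as 0.20 in one table and 0.21 in another (0.37 - 0.16).

```python
# Component values are calculated to four decimals but displayed at two,
# so a difference of displayed values need not match the displayed
# difference of the exact values.

m1_dzs = 0.3676           # Marner's Method 1 DZS reward (from the text)
m2_dzs = m1_dzs - 0.2068  # his actual loss of 0.2068 PR-Points

print(f"{m1_dzs:.2f}")            # 0.37
print(f"{m2_dzs:.2f}")            # 0.16
print(f"{m1_dzs - m2_dzs:.4f}")   # 0.2068
print(f"{m1_dzs - m2_dzs:.2f}")   # 0.21, though the displayed 0.37 and 0.16
                                  # could just as easily suggest a round 0.20
```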

Summary

This article turned out to be more of an investigation of the Method 2 formulas used for Productivity Rating than it was an investigation of the top ten players. That the top seven players would have been the top seven in both methods, and even would have been in the same order, is a very strong indication that both methods properly identify the top players.

What was interesting to me was the differences between the lists. Sanderson and Seider moved into the top ten, but they would have been in the top fifteen under Method 1, so this is actually a small change.

The big surprise came from Kyle Connor. His TRC score in Method 1 was unexpectedly low for a high-scoring forward, which dropped him in the Method 1 rankings. While his DZB.xGA score is low in Method 2, it is a better score than his TRC in Method 1, and his PR-Score in Method 2 increased as a result. Most other players at the top of the rankings had lower PR-Scores in Method 2, as their good TRC scores were replaced by lower DZB.xGA scores.

Related Articles

Productivity Rating Method 2

2024-25 Regular Season Awards
