

Examining Department of Player Safety Actions By Team


Which team draws more DoPS actions? Which teams offend more than others? A deep dive into some data in advance of Corey Perry’s hearing.

2020 Bridgestone NHL Winter Classic - Nashville Predators v Dallas Stars Photo by John Russell/NHLI via Getty Images

The NHL’s disciplinary system exists in the shadows, and the league seems to be ultimately okay with that.

In an effort to shine a little light when it comes to Department of Player Safety (DoPS) action, I decided to look into all post-game discipline handed down since the start of the 2017-18 season. I’ll admit I went into this with a hypothesis – that the amount of perceived outrage about an incident played a large role in whether or not there was DoPS response. Since I couldn’t quantify public response to hits, particularly for some of the older incidents, I used tiered “team popularity” as a stand-in.

I came up with three tiers of teams – the Canadian/most popular/largest media markets (i.e., the teams that the NHL puts in every other marketing campaign and/or league marquee event such as the Winter Classic), the middle tier of teams with either large media markets or recent, repeated playoff success that leads to a certain increase in national media attention, and then the third tier of teams either in large markets but overshadowed by other teams/other sports or teams in small markets with a smaller media presence.

Here is the list I came up with. It does introduce a slight complication that the third tier has 9 teams to the other two tiers’ 11, but we’ll address that as it comes up.

This is clearly just one interpretation of the tiers, but I made them a year ago when I initially started this project, and the fact that it’s basically held up over the course of more than two seasons makes me more inclined to stick with it.

The general idea is to tier the amount of “outrage” over an incident by the number of voices that would be talking about it. Each team was assigned a number by tier (Tier 1 = 2, Tier 2 = 1, Tier 3 = 0), and I went through the DoPS Twitter account to record every disciplinary action from September 2017 to December 2019. Both teams, both players, the type of infraction and the result were recorded, then each team was assigned their popularity score of 0 to 2 as described above, and each game popularity total was calculated by the sum of each team’s score.
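As a sketch, the scoring scheme described above amounts to a simple lookup and sum. The tier assignments below are illustrative examples only, not the full 31-team mapping from my list:

```python
# Score per tier as described above: Tier 1 = 2, Tier 2 = 1, Tier 3 = 0
TIER_SCORE = {1: 2, 2: 1, 3: 0}

# Illustrative tier assignments only -- the full mapping is the list linked above
TEAM_TIER = {"Toronto": 1, "Pittsburgh": 1, "Colorado": 2, "Dallas": 3}

def game_popularity(home: str, away: str) -> int:
    """Game popularity = sum of both teams' popularity scores (0 to 4)."""
    return TIER_SCORE[TEAM_TIER[home]] + TIER_SCORE[TEAM_TIER[away]]
```

So a Toronto-Pittsburgh game scores a 4, while a game between two Tier 3 teams scores a 0.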

The raw data is here for your perusal. Excuse any typos, particularly on player names that may have been copied over at 2 a.m.

Overall, there were 109 total DoPS actions in the time frame in question. Of those, 52 resulted in a suspension and 57 resulted in a fine (somewhat tilted by this last month where DoPS has issued two suspensions to six fines). Prior to reductions on appeal, these actions cost players 157 games and $242,000 in fines.

My first analysis was on overall game popularity. My hypothesis was that the more eyeballs on a game (i.e., the more popular the teams involved), the greater the outrage and the greater the chance of a DoPS reaction. This was mostly supported, with a caveat I didn’t expect. It appears there is a baseline level of game attention that must be met to incur DoPS action, but once that level is met, action does not increase further as game popularity increases.

This chart is striking, but what it doesn’t quite illustrate well is how few DoPS actions took place in games involving two teams from the third tier. There were just three total DoPS actions in these games (1 suspension and 2 fines). In contrast, games among the most popular teams drew 10 actions (7 suspensions and 3 fines). While there are fewer teams in Tier 3, it’s about 80 percent of the other two, and even my dyscalculic self knows three is far less than the eight it should be proportionally.
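As a quick sanity check on the proportionality arithmetic: scaling by team count (9/11 of the 10 Tier 1 actions) gives the eight mentioned above, while scaling instead by the number of possible same-tier matchups gives a slightly lower expectation. Either way, the three observed actions fall well short. A hedged sketch:

```python
from math import comb

tier1_teams, tier3_teams = 11, 9
tier1_actions = 10  # 7 suspensions + 3 fines in games between two Tier 1 teams

# Scale by team count, as in the text: 9/11 of 10 actions
expected_by_teams = tier1_actions * tier3_teams / tier1_teams

# Alternative: scale by possible intra-tier pairings, C(9,2)=36 vs C(11,2)=55
expected_by_matchups = tier1_actions * comb(tier3_teams, 2) / comb(tier1_teams, 2)
```

Both approaches expect somewhere between six and eight actions in Tier 3-only games, more than double the three that actually occurred.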

In other words, the evidence supports the theory that DoPS does not take action on events from low-popularity games at the same rate as it does when mid-market or popular teams are involved.

But game popularity may not be the best way to gauge reaction. Only one set of voices needs to be loud enough to potentially embarrass the NHL, after all, so potentially a better way to look at it is by individual team tier. Ergo, the next step is to analyze the number of games that drew DoPS action that had at least one team from each tier involved.

Your first comment may be that these numbers don’t add up to the 52 total suspensions and 57 total fines, and you’re right. Games can be double-counted in this analysis if they involve teams from more than one tier (i.e., a suspension from a game involving a Tier 2 and Tier 3 team is a mark in both the middle and low popularity columns). Again, what strikes me here is the disparity between the low tier and the top two. Low-popularity teams were involved in games resulting in 19 suspensions and 18 fines, well off the 26 and 29 they should have drawn proportionally relative to the most popular teams, or the 30 and 36 relative to the middle-tier teams.
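The double-counting works like a per-tier tally: each action is credited to every tier that had a team in the game, so a cross-tier game lands in two columns while a same-tier game counts only once. A sketch with made-up records for illustration:

```python
from collections import Counter

# Hypothetical records: (tier of one team, tier of the other, result)
actions = [
    (2, 3, "suspension"),  # cross-tier game: counted for both Tier 2 and Tier 3
    (1, 1, "fine"),        # same-tier game: counted once, for Tier 1
    (1, 3, "fine"),
]

per_tier = Counter()
for tier_a, tier_b, result in actions:
    for tier in {tier_a, tier_b}:  # the set avoids double-crediting same-tier games
        per_tier[(tier, result)] += 1
```

Summing the per-tier columns therefore exceeds the raw action count whenever cross-tier games are in the mix, which is exactly the discrepancy noted above.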

Again, this demonstrates that DoPS is significantly more likely to step in when no low-popularity team is involved.

This raises the obvious question: does it make a difference whether the more or less popular team is the offender or the victim? For the offending team, at least, the answer is no.

This graph actually shows a huge disparity between the top and middle tier teams and the low tier ones. Proportionally, one would expect low-tier teams to have picked up disciplinary action 38 times while in reality they have nine suspensions and six fines. Now, you could argue that potentially those nine teams are the cleanest in the league, that Roman Polak and Brad Richardson and Zach Bogosian are all inherently less prone to suspensions than the rest. But do you really believe that? I know I don’t.

So what of the player who got hurt? Surely team popularity would help draw public ire, and therefore DoPS action, when that player is a Blackhawk, Penguin or Leaf, right?

Actually, no.

This is potentially the most surprising graph to me, in that the victimized team appears to be evenly distributed in terms of fines but grossly weighted toward the middle-tier teams in terms of suspensions.

So what is driving that? Which teams in that middle tier are you apparently not allowed to touch? The answer might surprise you…

Oh hi, Colorado Avalanche, with a little bit of Tampa Bay Lightning for good measure. Boston, Colorado and Tampa Bay combined to draw 28 percent of the suspensions in the league over the past 2.5 seasons. Tampa is also far and away the league leader in drawing fines, with Pittsburgh a distant second.

The obvious follow-up is whether there are similar outliers among the offenders – is there one team driving up suspensions or fines in any one category that might call the whole analysis-by-tiers method into question?

The suspensions and fines are generally much more evenly distributed, with Ottawa, Winnipeg, St. Louis and Tampa being the big drivers here (though the Blues’ distribution of one suspension to six fines is amusing in a head-slapping sort of way to me). Carolina, Dallas and the Islanders are the darlings who never do anything wrong.

In fact, Dallas was the only team in the league to have neither taken nor drawn DoPS action from the start of the 2017-18 season through the end of the 2019 calendar year. It’s unclear whether DoPS knew the Stars existed until they played in the Winter Classic yesterday, when Corey Perry elbowed Ryan Ellis in the head. Of course, that’s the league’s marquee event and the only game on the schedule on New Year’s Day — there’s no bigger spotlight in the annual NHL schedule than that.

So what did we learn from all this? It’s relatively clear that the possibility of DoPS involvement goes significantly up once a game reaches a certain “eyeball count,” either through the broadcast or on social media. Games between the nine teams in the lowest tier in popularity are far, far less likely to draw intervention.

What this analysis cannot account for is individual action within a game. There is a chance, though I would argue with a sample size of 2.5 seasons a remote one, that perhaps the less popular teams are simply cleaner, that their games don’t turn on a Tom Wilson or Radko Gudas indiscretion as often as the Capitals’ or Flyers’ games do.

However, this is a sample of more than 3,000 games, and given the NHL’s history of deference to its most prominent franchises and to mainstream media criticism, I think the data suggest there is something more significant at play here. Perhaps that’s to the benefit of teams like the Stars. After all, I’m sure some Arizona Coyotes fans would argue there should have been a suspension this past Sunday when they took on the Stars, and not to Taylor Hall for his kneeing. Perhaps flying under the radar is a good place to be sometimes.

But from a player protection standpoint, the players on lower-tier teams deserve the same scrutiny as those on Boston, Tampa Bay and Colorado. And as the data show, the NHL isn’t distributing discipline evenly across its teams at this point.

Erin Bolen contributed the data tracking and write-up while Taylor Baird was responsible for the data visualizations contained herein.