If you’re a Dallas Stars fan, you’ve likely been hearing a lot of dirty words lately. CEO Jim Lites’ infamous tirade might as well have been an ad for Urban Dictionary. Head coach Jim Montgomery used the dreaded M-word about a franchise he’s now a big part of. For my part, I’d like to talk about another dirty word and its synonyms.
Analytics. Fancy stats. Numbers. Metrics. Statistics. Figures.
It’s impossible to talk analytics without running into a bunch of verbal obstacles. “Traditionalists” tend to hate them. Sometimes they’ll politely ask you to throw them out the window. Sometimes they’ll run the classic either-or fallacy of arguing that statistics shouldn’t replace tape — as if anyone wanting to understand the game would deliberately ignore extra ways to categorize information just to focus harder on Excel. Players kind of hate them. Unless the numbers make them look good. Or, on the rare occasion, they completely understand their significance. People hate them.
It’s understandable. When people think of numbers, they think of being reduced. How many of us failed a chemistry test, but not the class? How many of us couldn’t afford rent, but weren’t evicted? A single number assessing broad value doesn’t feel right. That’s probably why Pierre McGuire went on an unhinged rant about the “failure of analytics.”
True, there are some unsuccessful teams connected to analytics. Arizona Coyotes GM John Chayka got his start with an analytics company called Stathletes. Several years ago, the Edmonton Oilers hired Tyler Dellow. But there’s also the Washington Capitals, who brought in Tim Barnes, the “Irreverent Oiler Fan” and blogger who coined the term “Corsi” in 2008. The Pittsburgh Penguins brought over Sam Ventura, who founded the essential War on Ice — the first stats site to incorporate shot maps. The Tampa Bay Lightning are intimately linked with Michael Peterson, who has worked closely with the team using analytics for 10 years. So far I count two recent Stanley Cup champs, and the best team in hockey right now.
In the middle of all this, the NHL is getting ready to go from tracking roughly 350 events per game to tracking 10,000, thanks to a fancy new chip.
We’re primed to have specific information about broad talents — like how fast players are or how hard they hit — and specific information about nuanced talents, such as measuring a defenseman’s gap control, and whether a center who’s good at face-offs relies on wingers who are good at retrieving the puck for him.
But does more information mean good information?
What Stats Are Really About
Bias. Bias. Bias. BIAS.
“Fancy stats” have never been about just numbers. Nobody uses Corsi — the stat connecting players to how often (or how rarely) a shot attempt occurs when they’re on the ice — to tell you who is good and who is bad. Miro Heiskanen has been on the ice for 781 shot attempts for, and 881 shot attempts against. That’s a Corsi For percentage of 46.99, well below the break-even mark of 50. Does that mean that Heiskanen is a bad possession player? Of course not. Numbers aren’t a question of broad value. They’re signals for organizing how we value. A number like Heiskanen’s should provoke the following question:
How can we organize the data we have so that a pattern emerges to help us learn more — apply what we learn — and/or prevent illusions of what those patterns mean?
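The arithmetic behind Corsi is deliberately simple; it’s the interpretation that takes the work. Here’s a minimal sketch in Python using Heiskanen’s numbers from above (the function name is mine):

```python
def corsi_for_pct(attempts_for, attempts_against):
    """Share of all shot attempts that belong to a player's team
    while that player is on the ice, expressed as a percentage."""
    return 100 * attempts_for / (attempts_for + attempts_against)

# Heiskanen: 781 attempts for, 881 against while on the ice
print(round(corsi_for_pct(781, 881), 2))  # 46.99
```

The output alone says nothing about usage, zone starts, or teammates; it only tells you where to start asking questions.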
Illusions aren’t just a hockey problem. They’re a human problem. The easier we can visualize something, the easier that visual can be mistaken for evidence of its frequency. Surveys show that people think homicides cause more deaths than diabetes, that tornadoes are deadlier than lightning, and that car accidents kill more than abdominal cancer — even though the opposite is true in every case. If we accept that our brains are more responsible for vision than our eyes, then we have to accept that our blind spots run deeper than our eyes, into the very soul of our perception. The “eye test” — a phrase that sounds suspiciously like an unwillingness to do homework — is a bad way to evaluate players. Our minds clumsily shift between voluntary and involuntary systems of thinking. We are so clumsy at this that a goldfish outpaces us at processing new bits of information at a time. The idea that we can evaluate all the events that happen and don’t happen within a given experience is nonsense.
But what does bias mean in the context of hockey? Why is this relevant at all?
Because numbers are already a big part of understanding hockey. Wins, points, goals, assists, etc. This is where fancy stats come to the rescue, to try to eliminate the bias that more heavily affects “traditional” numbers.
Just look at the 2013-14 Colorado Avalanche. Led by Patrick Roy, they piled up wins. But those wins did not represent the on-ice effectiveness of the team from game to game. That year they were bottom-10 at generating shot attempts, yet fourth in even-strength goal scoring. Colorado’s success was biased by an unbelievable shooting percentage of 10.24 percent. As Travis Yost noted at the time, “a team’s even-strength shooting percentage over one year tells us absolutely nothing about how that team will shoot the following year.” The Avalanche predictably crumbled the following year, because generating shots is a repeatable skill that corresponds with winning, while a high shooting percentage is not.
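To put that 10.24 percent in perspective: shooting percentage is just goals over shots, and a few points of percentage luck translate into a lot of goals over a season. A quick sketch with an assumed shot total and an assumed league-typical even-strength rate of about 8 percent (both numbers are mine, for illustration only, not the Avalanche’s actual totals):

```python
def shooting_pct(goals, shots):
    """Goals scored per shot on goal, as a percentage."""
    return 100 * goals / shots

shots = 2000                      # hypothetical season shot total
hot_goals = shots * 0.1024        # scoring at 10.24 percent
typical_goals = shots * 0.08      # scoring at a league-typical ~8 percent
print(round(hot_goals - typical_goals))  # roughly 45 "extra" goals
```

If the percentage regresses toward the mean while the shot rates stay poor, those extra goals simply vanish — which is what happened to Colorado.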
Consider point production. Last season, Devin Shore was seventh in team scoring for the Dallas Stars with 32 points. Does that mean he was the seventh-best offensive player? Of course not. His ice time relative to his teammates and his special-teams play “biased” his point totals. If we break his numbers down into rates, we see that Shore produced 1.14 points per hour at even strength, good for 12th on the team.
Esa Lindell is another good example. He’s currently top 20 in defensemen goal-scoring. Does that make him one of Dallas’ best offensive weapons from the blue line? Five of his seven goals have come on the power play, a power play he’s shooting 20 percent on. If we break his production down into primary points (goals plus primary assists), he ranks 72nd in scoring among blueliners with at least 700 minutes of TOI. Why break the numbers down this way? To eliminate bias. Goals and primary assists are more repeatable. So Lindell’s primary point totals better reflect his capacity for production rather than his operating production.
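Both adjustments are trivial to compute once you have the inputs: a rate stat strips out ice-time bias, and primary points strip out the noisier secondary assists. A sketch with hypothetical inputs (the function names and the example numbers are mine, not Shore’s or Lindell’s actual splits):

```python
def points_per_60(points, toi_minutes):
    """Scoring rate per 60 minutes, removing ice-time bias from raw totals."""
    return 60 * points / toi_minutes

def primary_points(goals, primary_assists):
    """Goals plus first assists: the more repeatable slice of production."""
    return goals + primary_assists

# Hypothetical: 20 even-strength points in 1,050 even-strength minutes
print(round(points_per_60(20, 1050), 2))   # 1.14
# Hypothetical: 7 goals and 4 primary assists
print(primary_points(7, 4))                # 11
```

The point of both functions is the same: convert operating production into something closer to capacity for production before you compare players.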
That’s an important distinction because Lindell is due for a raise at the end of the season. It’s an important distinction because if Dallas had done a better job of evaluating capacity for production rather than operating production, perhaps they would have traded Shore sooner for a potentially better return. Perhaps they would have realized they didn’t need Martin Hanzal, Blake Comeau, Lauri Korpikoski, Jiri Hudler, Adam Cracknell, et al. Perhaps Dallas would be invested in chasing their vision instead of trading three picks for three third-pairing defensemen (if we’re being technical, that’s three healthy scratch defensemen if Marc Methot and Stephen Johns were healthy).
Coaches Love Stats — As Long As They’re In Control Of The Inputs
The irony in these debates about fancy stats is that coaches and GMs have to do their homework. And that makes sense. When you evaluate players, you have to do more than tell them what you saw. You have to show them what you think you know. That homework requires “fancy stats.”
Jim Montgomery uses an outside company to track zone time in an attempt to see how long players spend from one corner to the next. Ken Hitchcock was obsessed with the same thing. He used something he called “synergy stats” to track zone time. Alain Vigneault had a ‘sophisticated’ stats package for the New York Rangers. In his stint with Toronto, Randy Carlyle preached a possession game, and sustained pressure — later arguing that there’s “a place for analytics.” The Kris Russell debate in Edmonton saw former GM Peter Chiarelli defend Russell against his detractors with his own fancy stat: tracking data.
Discerning fans are always told to appeal to the authority of men with lanyards. But what gives a coach the authority to interpret statistics? A funny thing happened several years ago in the field of statistics: a stat about stats revealed that people find it harder to interpret data on a pie chart than a bar graph. Scientists are still running experiments to learn more about how people analyze and process information(!).
I can’t stress this enough. The actual experts are saying “we need to look at how we analyze information before trying to process it,” and hockey coaches are saying “we know what the most important input is for the stats we need in order to evaluate players.”
Let’s take this way of thinking outside of hockey. Many readers are parents. Imagine a scenario where your child was failing a class, and the teacher made their assessment by arguing that “well, little Jimmy doesn’t work hard enough. He doesn’t show enough effort. He’s become complacent, and has accepted the mediocrity that surrounds him.”
Would you take this teacher seriously? Of course not. You’d rightly ask them for more data. “Can I see Jimmy’s homework? Can I see a timeline of his work, in case we can pinpoint if something happened outside of school to negatively affect his work? Are there behavior reports you have that might explain if something extracurricular is going on? Do you have a journal that logs his work that I can sign in order to catalog his progress?”
There’s a brazen arrogance to what we hear from hockey men about how we understand things that would look foolish in any other context. Why should we suddenly accept it in this context?
Smashing The Echo Chamber
Two things stand out to me as red flags in the “Tracking Data Will Help Us” era.
The first: five years ago, Bob McKenzie reported that most teams were “so beyond Corsi that it’s not even a talking point.” Years later, teams would go on to hire many of those who tracked, organized, and emphasized Corsi; it was as if teams already had good data, but didn’t know what to do with it. The second: what rugby has done with its GPS software, which can currently track everything from speed to collision energy loss. Canada’s rugby coach considers this revolutionary because it “gives you huge insight on work rate.”
This is what worries me. Coaches talk about extended zone time, and puck pressure. They talk about shot quality over shot quantity. They talk about creating offense from good defense (if that’s true, why can’t you create defense from good offense?). They talk about size as a skill. But how many of them are actively looking to analyze the ways they process information, or challenge their implicit biases?
I sometimes wonder if part of what bothers coaches about “fancy stats” is that it forces them to rethink what they’ve always known. Just look at some of the more interesting insights in recent years:
- “Getting the puck deep” (read: dump in) has a 25 percent chance of leading to a shot versus a 56 percent chance when carrying the puck in with possession.
- Spending more time in the zone is less likely to lead to a goal — nearly six in 10 goals are scored within seven seconds of zone time.
- A better predictor of keeping goals-against down is not clearing the crease, grabbing rebounds, or being “weighty”, but keeping down shot assists.
- A point shot is as likely to create a rebound that works in your opponent’s favor as a rebound that works in your favor.
- Rebounds convert at a higher rate when chances are created from behind the net versus bombing aimless shots from the point.
- Clearing the puck out of the zone leads to your team making the next zone entry attempt only 25 percent of the time.
- Face-offs don’t impact shot rates in the long run.
- Teammate quality > quality of competition. A defenseman’s on-ice numbers are more influenced by their defensive partner, followed by their forward opponents. A forward’s on-ice numbers are more influenced by their forward partners, followed by their defensive opponents.
I know. I hear the criticism too: So coaches should preach carrying the puck in at ALL times? They should tell forecheckers to stop pressuring after seven seconds? They should tell players to stop clearing the crease? Tell defensemen to stop shooting from the point? Tell everyone to always look for the clean pass in their own zone, even when it’s not there, just to avoid an indirect turnover? Abandon any edge potentially gained from winning face-offs? Play prospects against tough competition instead of proper veterans?
Well, no, no, and no. The data should provoke questions about previously held biases, such as the following:
- Can I find ways to carry the puck in fast even when I don’t have fast players?
- Are my forecheckers working more often because they work hard rather than work effectively?
- Do we run a system where it’s critical for blueliners to clear the crease?
- Do I have players who can threaten from the point in creative ways?
- Would this roster be better with Defenseman A, who makes one direct mistake every 10 breakouts but no indirect mistakes, or Defenseman B, who never makes a direct mistake but makes five indirect mistakes every 10 breakouts?
- If I have to choose between two guys with similar production but dissimilar possession numbers, do I pick the guy with bad possession numbers and great face-off numbers, or the guy with bad face-off numbers and great possession stats?
- Should I automatically trust that my veteran blueliner will be better against tough competition, or have I considered that he’s dragging his teammate down when a prospect could actively improve the pair despite the lack of experience?
Teams clearly still have trouble even asking the right questions. If they didn’t, we wouldn’t see David Clarkson offered $37 million over seven years. It’s not like Clarkson’s an anomaly. You can look at literally every team and find a bad contract that could have been easily prevented.
Namita Nandakumar, the Philadelphia Eagles’ quantitative analyst, might have said it best when it comes to what makes “fancy stats” so valuable: “Instead of chasing tactics that pan out intermittently, and often by chance, find the ones statistically expected to work most frequently.”
It’s a quote that sounds familiar to me. The Director of the Human Neuroimaging Lab at Baylor College, Read Montague, was talking about the usual: Darwin, Plato, computational science, and desperation. He’s figured out a convenient definition for efficiency in times of desperation: Efficiency = the best long-term returns from the least immediate investment.
Hits, face-offs, blocked shots, goals, assists, and size. These are immediate investments with obvious returns. They are things with price tags. But what about when you can’t see the price tag? How do you evaluate the things that don’t happen? How do you see the silent plays, where good things happen, but it wasn’t the goal, the pass, or the face-off that helped create them? You don’t. Because you don’t need your eyes to ask the right questions.